Like many developers I come across Stack Overflow pages with some frequency when performing research. I came across a question to which I had an answer and decided it was time to try to contribute. Well, my answer was deleted, and then the question was deleted, and at no time did I get any information or feedback as to what was happening or why. This is poor user interface design. Though it may take some effort, it should be clear to users what is happening and why when they attempt to use a site. It only took one Google query to find that my experience is hardly unique:
I wonder if most websites simply have a lifespan that plays itself out. This could be good as it would suggest that there is an evolutionary aspect to software. Stack Overflow successfully displaced the sites that came before it by doing what it does better, and something will come along and replace Stack Overflow and the sites occupying that niche now when a better design and implementation are presented. Maybe that next thing will work better for me. Or maybe this niche is just not something that I will interact with, and that’s fine too.
I’ve disliked the NetworkManager dnsmasq integration for some time. As far as I understand it, caching is turned off by default, so I’m not sure what the rationale is at all. Disabling it means one less system process and less magic on the system (since resolv.conf will reflect the actual nameserver settings rather than being overwritten by NetworkManager). The key to disabling it is to open /etc/NetworkManager/NetworkManager.conf.
Comment or remove the line:

dns=dnsmasq

Then restart NetworkManager and kill any lingering dnsmasq process:
sudo /etc/init.d/network-manager restart
sudo killall dnsmasq
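Afterwards, resolv.conf should list your actual nameservers instead of the local dnsmasq stub; a quick sanity check:

cat /etc/resolv.conf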
I’ve used dnsmasq for many years, more than a decade even, and I have no complaints. I think it works well as a lightweight DNS forwarding and caching server. But I have also been experimenting with unbound for a couple of years and recently decided to switch over my home network to use it. This is on an Ubuntu 12.04 server but it was pretty easy:
apt-get remove --purge dnsmasq
apt-get install unbound ldnsutils
Ensure DNSSEC is working: the flags on the first query should include ad (authenticated data), the sigfail lookup should return SERVFAIL, and the sigok lookup should return NOERROR. You could use dig (from the dnsutils package) instead of drill:
drill -D com. SOA | grep flags
drill sigfail.verteiltesysteme.net | grep SERVFAIL
drill sigok.verteiltesysteme.net | grep NOERROR
Here are the config options I added to unbound.conf, other than settings specific to my network. I am testing with all of the harden options enabled to see if there are any problems, but YMMV. A representative sketch (the interface and access-control values are placeholders you will need to adjust):
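server:
    # Placeholder LAN interface and access policy; adjust for your network
    interface: 192.168.1.1
    access-control: 192.168.1.0/24 allow
    # Root hints file fetched by the cron.monthly script below
    root-hints: "/etc/unbound/root.hints"
    # DNSSEC trust anchor (kept current by unbound-anchor on Ubuntu)
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # Harden options
    harden-glue: yes
    harden-dnssec-stripped: yes
    harden-below-nxdomain: yes
    harden-referral-path: yes
    # Refresh popular cache entries before they expire
    prefetch: yes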
To get the root hints I created a script in cron.monthly that looks like this:
#!/bin/sh
/usr/bin/curl -sS -o /etc/unbound/root.hints https://www.internic.net/domain/named.cache
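Note that run-parts skips cron.monthly scripts with a dot in their name, so give the script a dot-free name (update-root-hints, say) and make it executable:

chmod +x /etc/cron.monthly/update-root-hints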
So now root hints will be updated every month. The only other thing I may consider at some point is running in a chroot, but for now this was pretty quick and easy. Also, to look at the stats:
watch unbound-control stats_noreset
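If unbound-control refuses the connection, the remote-control interface likely needs enabling; the minimal change is adding this to unbound.conf and running unbound-control-setup to generate the keys:

remote-control:
    control-enable: yes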
I’ve never been a big fan of rsyslog, preferring syslog-ng instead. However, I am giving rsyslog another chance. One of my biggest problems with rsyslog has been its high memory usage, documented by many people in many complaints that can be found using Google. Now, this memory is usually, though not always, virtual rather than resident. That means rsyslog is not actually hogging memory but merely causes reporting problems when searching for processes with high memory usage, as it will often show up. However, I’m trying a new solution that I hope will improve the situation. From the rsyslog wiki:
rsyslog is a (potentially massively) multi-threaded syslogd. Each of the threads requires a runtime stack. Rsyslog uses no specific stack allocation and sticks with the OS default. Seen in practice have been stack allocations of 8 to 10 MB per thread. In a process trace, this can look like a memory leak.
—Reducing memory usage
So how big is the default stack for rsyslog:
cat /proc/`pidof rsyslogd`/limits
On all of the 64-bit Ubuntu systems I’ve tested the answer is:
Max stack size 8388608 unlimited bytes
So 8MB. The easiest fix I discovered is this:
Edit /etc/default/rsyslog and add the line:
ulimit -S -s 128
This sets the soft limit for stack size to 128KB. Then restart rsyslog:

sudo service rsyslog restart

And check the limits again:
cat /proc/`pidof rsyslogd`/limits
Max stack size 131072 unlimited bytes
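To confirm the effect on the numbers people actually complain about, compare the virtual (VSZ) and resident (RSS) sizes before and after the change:

ps -o vsz,rss,args -C rsyslogd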
I’m interested in emulators, always have been, and I was wondering if anyone had used LLVM to write an emulator. I came across this highly detailed account which was a fun read:
Statically Recompiling NES Games into Native Executables with LLVM and Go
It concludes by stating that static recompilation is probably less practical than JIT, which I think is the correct conclusion. Still, it’s nice to have someone go through the effort of building something just to show how it can be done.
Amazon has released s2n, a new open source TLS implementation. I’m liking what I see: good design principles, aiming for simplicity and reviewability. I would not be surprised to see this become a huge success as people eschew the extra functionality of OpenSSL (and others) in favor of software with less security risk. Amazon’s backing is likely to propel s2n over other implementations with similar goals.
When using Passenger with an app deployed to a sub-uri, most details of app paths are handled automatically. However, when precompiling assets, helpers such as asset_path will generate the wrong paths because the asset pipeline doesn’t know about the sub-uri. The solution is to set Rails.application.config.action_controller.relative_url_root in an initializer or environment file to the sub-uri path. I learned this from the font-awesome-rails documentation, but this is a common issue, as I found a lot of people running into similar problems wherever assets were being precompiled.
To be clear, the setting looks like this (the /myapp sub-uri below is a hypothetical placeholder):
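# config/initializers/relative_url_root.rb (hypothetical filename)
Rails.application.config.action_controller.relative_url_root = '/myapp'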
This can also be accomplished by setting the environment variable RAILS_RELATIVE_URL_ROOT, though I think that will usually be more work; still, there may be instances where it is the better solution.
It would be wise to include this in the passenger documentation for deploying to a sub-uri as this is where most people could be made aware of the importance of this detail and how not addressing it could lead to problems later.
As far as I’m concerned, the decision by the Ubuntu team to default to failing to boot when a RAID array is degraded is a complete mistake. When your decision is the opposite of what everyone else has chosen to do in a given situation, you’re probably doing it wrong. Having an option for people to choose to fail to boot on a degraded array is a great idea, and one that would probably almost never be used. But I’m all for options, and also for sensible defaults. So on any Ubuntu system where you’re running mdadm RAID, don’t forget to do one of the following:
sudo dpkg-reconfigure mdadm
or edit /etc/initramfs-tools/conf.d/mdadm and set:

BOOT_DEGRADED=true

then rebuild the initramfs:

sudo update-initramfs -u -k all
Every Ubuntu system with mdadm RAID, every time.
So FreeTDS is the glue between a Linux system and SQL Server, among other things. For my purposes I’m mainly using it to run Rails with a SQL Server backend. My understanding is that the alternatives to FreeTDS are Microsoft’s SQL Server ODBC Driver 1.0 for Linux or, if using JRuby, Microsoft’s JDBC Driver for SQL Server. But why am I even looking at alternatives? Well, FreeTDS has a performance problem when inserting large numbers of records. This bug may not be universal, which is to say it might only appear in certain contexts, but it is significant: when bulk inserting records, performance is abysmally slow. I’ve seen this within Rails, which made me think the problem might be in TinyTDS or the ActiveRecord SQL Server Adapter. However, I’ve noticed the problem with the FreeTDS binaries tsql, bsqldb, and fisql. That makes me think the problem is in FreeTDS itself, at least in the stable 0.91 release, which came out in August 2011 though it has been patched subsequently. It’s possible that the current 0.95 version will resolve these problems, but I have not yet tested it.

There is one binary not affected by this problem: freebcp. However, I have been finding that freebcp has its own bugs/quirks/idiosyncrasies, or perhaps is exposing those of the underlying FreeTDS code. In any case, freebcp is not a great solution, but for bulk data transfer from Linux to SQL Server it seems to be the only game in town.
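For reference, a freebcp bulk load looks something like this (the server, credentials, table, and data file are placeholders; -c selects character mode and -t sets the field terminator):

freebcp mydb.dbo.mytable in data.csv -S myserver -U myuser -P mypass -c -t ','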