I have never been in this position in my life. I have been an avid computer user since my family’s first 4.77 MHz 8088 PC circa 1986. This is the first time I’ve ever actively recommended people not buy a computer. The reason is that this is the first time all major microprocessors on the market have serious flaws that I believe should be resolved before purchasing. And that’s not the only issue.
It is interesting how almost all coverage refers to “Spectre and Meltdown” instead of “Meltdown and Spectre”. Meltdown is by far the more serious vulnerability, and it affects all Intel microprocessors on the market and most Intel microprocessors produced since 2011 (possibly many from as early as 1995). Mitigations for this attack will likely reduce computer performance. This is not the end of the world, but it is understandable that customers would be troubled both by the possibility of undetectable security failures and by the loss of performance required to correct them. However, if this were all there was to the story, I’d probably tell people to wait until the patches are in place and then resume buying. But I can’t.
The CEO of Intel, Brian Krzanich, upon learning of these vulnerabilities, immediately sold as much stock in the company as he could. That is insider trading. Given months to come up with a solution, the company failed. Once the vulnerabilities were revealed, Intel actively downplayed their significance and engaged in a campaign of misinformation to the public. This is a corporate culture that I cannot support. I hope Intel is able to find a path back to integrity, but I will not hold my breath. And I will not support a company with such an unethical business culture.
This should have been great news for AMD. Their microprocessors are not affected by Meltdown. However, there have been reports of system instability for certain tasks on AMD microprocessors. This information has been difficult to pin down, but it has been reported that the problem is fixed and that AMD will replace affected chips. This is a good stance for a business to take. I suspect an AMD Ryzen system may be my next computer purchase.
I believe the rise of so many new programming languages in the past few years is a response to reflection on what people like and dislike about current programming languages, combined with not yet finding a language as great as they imagine a language could be. I suspect someone could write a book on this topic, but I just wanted to document a few thoughts and notes.
- There is a renewed interest in performance (execution speed) and memory conservation. See: Rust, Go, Julia, Nim, Swift.
- Focus on safety and concurrency. See Rust, Swift, Go.
- More features from functional programming languages are being adopted to reduce side-effects such as immutable variables (at least optionally or by default as in Rust).
- “Composition over inheritance” – see Rust (no inheritance, all composition), Go (no inheritance, all composition), and Julia.
Not all of the languages being explored are new:
- Elixir (functional)
- Clojure (functional)
The only thing I find missing is a focus on expressiveness. I would like to see some benchmarks that include expressiveness as a metric. I really like Ruby because of its expressiveness. I am not a fan of purely functional languages, but I like the idea of making functional concepts the default with optional ways to override them when necessary, as Rust does. I may try to come up with a benchmark to illustrate this issue. Mostly I think of this as a problem with large (think approximately 1 GB in size) immutable arrays undergoing calculations. Plus, for loops with a variable for the current index or value are a pretty handy feature. I don’t prefer recursion as an alternative.
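To illustrate the Rust-style defaults I mean (immutable unless you opt into `mut`, and for loops that carry the current index rather than forcing recursion), here is a minimal sketch; the numbers are just an example:

```rust
fn main() {
    let xs = [10, 20, 30]; // bindings are immutable by default
    // Mutability is explicit and opt-in via `mut`
    let mut sum = 0;
    // enumerate() yields the current index alongside each value
    for (i, x) in xs.iter().enumerate() {
        sum += i as i32 + *x;
    }
    println!("{}", sum); // prints 63: (10 + 20 + 30) + (0 + 1 + 2)
}
```

Forgetting `mut` on `sum` is a compile-time error, which is exactly the functional-by-default, override-when-needed behavior I'd like to see more languages adopt.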
Like many developers, I come across Stack Overflow pages with some frequency when doing research. I came across a question to which I had an answer and decided it was time to try to contribute. Well, my answer was deleted, and then the question was deleted, and at no time did I get any information or feedback as to what was happening or why. This is poor user interface design. Though it may take some effort, it should be clear to users what is happening and why when they attempt to use a site. It only took one Google query to find that my experience is hardly unique:
I wonder if most websites simply have a lifespan that plays itself out. This could be good as it would suggest that there is an evolutionary aspect to software. Stack Overflow successfully displaced the sites that came before it by doing what it does better, and something will come along and replace Stack Overflow and the sites occupying that niche now when a better design and implementation are presented. Maybe that next thing will work better for me. Or maybe this niche is just not something that I will interact with, and that’s fine too.
I’ve disliked the NetworkManager dnsmasq integration for some time. As far as I understand, it has caching turned off by default, so I’m not sure what the rationale for it is at all. Disabling it means one fewer system process and less magic on the system (since resolv.conf will reflect the actual nameserver settings rather than being overwritten by NetworkManager). The key to disabling this is to open:
Comment or remove the line:
sudo /etc/init.d/network-manager restart
sudo killall dnsmasq
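The file and line weren’t preserved above; on the stock Ubuntu setups I am assuming here, the file is /etc/NetworkManager/NetworkManager.conf and the integration is enabled by a dns=dnsmasq line under [main], so disabling it looks roughly like this:

```
# /etc/NetworkManager/NetworkManager.conf (assumed path on Ubuntu)
[main]
plugins=ifupdown,keyfile
# Comment out the line below to disable the dnsmasq integration:
#dns=dnsmasq
```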
I’ve used dnsmasq for many years, more than a decade even, and I have no complaints. I think it works well as a lightweight DNS forwarding and caching server. But I have also been experimenting with unbound for a couple of years and recently decided to switch over my home network to use it. This is on an Ubuntu 12.04 server but it was pretty easy:
apt-get remove --purge dnsmasq
apt-get install unbound ldnsutils
Ensure DNSSEC is working (you could use dig, from the dnsutils package, instead of drill). The first query should return the ad (authenticated data) flag, the sigfail test should return SERVFAIL, and the sigok test should return NOERROR:
drill com. SOA +dnssec | grep flags
drill sigfail.verteiltesysteme.net | grep SERVFAIL
drill sigok.verteiltesysteme.net | grep NOERROR
Here are the config options I added to unbound.conf, other than settings specific to my network. I am testing with all harden options enabled to see if there are any problems, but YMMV:
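The actual options didn’t survive in this post; as a rough sketch, “all harden options enabled” in unbound.conf would look something like the following (availability of individual options varies by unbound version, so check unbound.conf(5)):

```
server:
    # Hardening options, named per unbound.conf(5); version-dependent
    harden-glue: yes
    harden-dnssec-stripped: yes
    harden-below-nxdomain: yes
    harden-referral-path: yes
    hide-identity: yes
    hide-version: yes
    # Use the monthly-updated root hints file
    root-hints: "/etc/unbound/root.hints"
```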
To get the root hints I created a script in cron.monthly that looks like this:
#!/bin/sh
/usr/bin/curl -sS -o /etc/unbound/root.hints https://www.internic.net/domain/named.cache
So now root hints will be updated every month. The only other thing I may consider at some point is running unbound in a chroot, but for now this was pretty quick and easy. Also, to look at the stats:
watch unbound-control stats_noreset
I’ve never been a big fan of rsyslog, preferring syslog-ng instead. However, I am giving rsyslog another chance. One of my biggest problems with rsyslog has been its high memory usage, documented by many people in many complaints that can be found using Google. Now, this memory is usually, though not always, virtual rather than resident. That means rsyslog is not actually hogging memory but merely causes reporting problems when searching for processes with high memory usage, as it will often show up. However, I’m trying a new solution that I hope will improve the situation. From the rsyslog wiki:
rsyslog is a (potentially massively) multi-threaded syslogd. Each of the threads requires a runtime stack. Rsyslog uses no specific stack allocation and sticks with the OS default. Stack allocations of 8 to 10 MB per thread have been seen in practice. In a process trace, this can look like a memory leak.
—Reducing memory usage
So how big is the default stack for rsyslog?
cat /proc/`pidof rsyslogd`/limits
On all of the 64-bit Ubuntu systems I’ve tested the answer is:
Max stack size 8388608 unlimited bytes
So 8MB. The easiest fix I discovered is this:
Edit /etc/default/rsyslog and add the line:
ulimit -S -s 128
This sets the soft limit for stack size to 128 KB. Then restart rsyslog (on Ubuntu, sudo service rsyslog restart is typical) and check again:
cat /proc/`pidof rsyslogd`/limits
Max stack size 131072 unlimited bytes
I’m interested in emulators, always have been, and I was wondering if anyone had used LLVM to write an emulator. I came across this highly detailed account which was a fun read:
Statically Recompiling NES Games into Native Executables with LLVM and Go
It concludes stating that static recompilation is probably less practical than JIT, which I think is the correct conclusion. Still, nice to have someone go through the effort of building something just to show how it can be done.
I’m looking at keyboards, as my BTC 6100C is starting to develop some key issues. I’ve been pretty happy with the 6100C, but it’s no longer made and BTC is not making any wired compact keyboards at this time. Here’s what I’ve found:
- A4Tech KL-5 and variants ($40) – these are nearly identical to the BTC 6100C, so they probably have the same manufacturer. However, the A4Tech drops the volume control buttons, and the other hotkeys are not particularly useful without remapping. I’m considering it a fallback option if I don’t like the feel of other options.
- Genius LuxeMate i200 ($19) – I like the layout, and I like the fact that it actually has a zoomable picture on Amazon for examining the layout details closely.
- GearHead – 89-Key Mini USB Windows® Keyboard ($15 plus shipping) – Looks like a very similar layout to what I have, but nothing in the description sells it over other options.
- SIIG JK-US0312-S1 ($22) – Better hotkeys and a similar layout to the BTC 6100C.
- GMYLE® Ultra Thin Wired USB Mini Keyboard – Only 78 keys, though Fn + arrow keys for Page Up/Page Down/Home/End makes a lot of sense.
- Boxcave 78 Key Wired USB Mini Slim Keyboard ($20) – identical to GMYLE
- Perixx PERIBOARD-407B
I’m liking what I see in s2n: good design principles, aiming for simplicity and reviewability. I would not be surprised to see it become a huge success as people eschew the extra functionality of OpenSSL (and others) in favor of software with less security risk. Amazon’s backing is likely to propel s2n over other implementations with similar goals.
When using Passenger with an app deployed to a sub-uri, most details of app paths are handled automatically. However, when precompiling assets, helpers such as asset_path will generate the wrong paths because the precompile step doesn’t know about the sub-uri. The solution is to set Rails.application.config.action_controller.relative_url_root in an initializer or environment file to the sub-uri path. I learned this from the font-awesome-rails documentation, but this is a common issue, as I found a lot of people running into similar problems wherever assets were being precompiled.
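A minimal sketch of that setting in an initializer, assuming the app is served from a hypothetical sub-uri /myapp:

```ruby
# config/initializers/relative_url_root.rb
# "/myapp" is a hypothetical sub-uri; use your actual mount point.
Rails.application.config.action_controller.relative_url_root = "/myapp"
```

With this set before assets are precompiled, asset_path and friends will prefix asset URLs with the sub-uri.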
Be clear this is:
This can also be accomplished by setting the environment variable RAILS_RELATIVE_URL_ROOT. I think that will usually be more work, but there may be instances where it is the better solution.
It would be wise to include this in the Passenger documentation for deploying to a sub-uri, as that is where most people could be made aware of the importance of this detail and how not addressing it can lead to problems later.