I’m liking what I see: good design principles, aiming for simplicity and reviewability. I would not be surprised to see this become a huge success as people eschew the extra functionality of OpenSSL (and others) in favor of software with less security risk. Amazon’s backing is likely to propel s2n over other implementations with similar goals.
When using Passenger with an app deployed to a sub-URI, most details of app paths are handled automatically. However, assets precompiled using helpers such as asset_path will get the wrong path, because precompilation doesn’t know about the sub-URI. The solution is to set Rails.application.config.action_controller.relative_url_root in an initializer or environment file to the sub-URI path. I learned this from the font-awesome-rails documentation, but this is a common issue: I found a lot of people running into similar problems wherever assets were being precompiled.
- font-awesome-rails issue 74
- font-awesome-rails issue 402
- jquery-ui-rails issue 54
- rails issue 3365
- rails issue 8941
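For example, if the app were served from a hypothetical sub-URI /myapp, the initializer might look like this (the path and filename are placeholders, not from any of the issues above):

```ruby
# config/initializers/relative_url_root.rb
# "/myapp" is a placeholder sub-URI; substitute your actual deployment path.
Rails.application.config.action_controller.relative_url_root = '/myapp'
```

With this set, asset_path and friends will prefix precompiled asset URLs with the sub-URI.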
This can also be accomplished by setting the environment variable RAILS_RELATIVE_URL_ROOT. I think that will usually be more work, but there may be instances where it is the better solution.
It would be wise to include this in the Passenger documentation for deploying to a sub-URI, as that is where most people could be made aware of the importance of this detail and how not addressing it can lead to problems later.
As far as I’m concerned, the decision by the Ubuntu team to default to failing to boot when a RAID array is degraded is a complete mistake. When your decision is the opposite of what everyone else has chosen in a given situation, you’re probably doing it wrong. Having an option for people to choose to fail to boot on a degraded array is a great idea, and one that would probably almost never be used. But I’m all for options, and also for sensible defaults. So on any Ubuntu system where you’re running mdadm RAID, don’t forget to do one of the following:
- run sudo dpkg-reconfigure mdadm, or
- update /etc/initramfs-tools/conf.d/mdadm to set BOOT_DEGRADED=true, then run sudo update-initramfs -u -k all
Every Ubuntu system with mdadm RAID, every time.
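A non-interactive sketch of the second option, assuming the mdadm package has already created the conf.d file with a BOOT_DEGRADED line:

```shell
# Set BOOT_DEGRADED=true in the initramfs mdadm config, then rebuild
# the initramfs for all installed kernels. Requires root.
sudo sed -i 's/^BOOT_DEGRADED=.*/BOOT_DEGRADED=true/' /etc/initramfs-tools/conf.d/mdadm
sudo update-initramfs -u -k all
```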
When to use which:

Hash:
- size and length both call RHASH_SIZE, which is O(1)
- count is Enumerable#count, which is O(n) – use only with a block
- best practice: length without a block, count with a block

Array:
- size is an alias for length
- length calls RARRAY_LEN, which is O(1)
- count is O(n) – use only with a block
- best practice: length without a block, count with a block

ActiveRecord:
- count and size create a COUNT query, which is often faster than the alternative
- length creates an array and calls length on it, which is usually slower
- best practice: count
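A quick sketch of the Array guidance above (plain Ruby, no Rails assumed):

```ruby
arr = [1, 2, 3, 4]

arr.length          # O(1): reads the stored length, no iteration
arr.size            # same as length (alias)
arr.count           # works, but prefer length when there is no block
arr.count(&:even?)  # => 2 – count earns its keep when filtering with a block
```

The block form is where count is the right tool: it iterates once and tallies matches, which length cannot do.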
So FreeTDS is the glue between Linux systems and SQL Server, among other things. For my purposes I’m mainly using it to run Rails with a SQL Server backend. My understanding is that the alternatives to FreeTDS are Microsoft’s SQL Server ODBC Driver 1.0 for Linux or, if using JRuby, Microsoft’s JDBC Driver for SQL Server.

But why am I even looking at alternatives? Well, FreeTDS has a performance problem when inserting large numbers of records. This bug may not be universal, which is to say it might only appear in certain contexts, but it is significant: bulk inserts are abysmally slow. I’ve seen this within Rails, which made me think the problem might be in TinyTDS or the ActiveRecord SQL Server adapter. However, I’ve noticed the same problem with the FreeTDS binaries tsql, bsqldb, and fisql. That makes me think the problem is in FreeTDS itself, at least in the stable 0.91 release, which came out in August 2011 though it has been patched subsequently. It’s possible that the current 0.95 version will resolve these problems, but I have not yet tested it.

There is one binary not affected by this problem: freebcp. However, I have been finding that freebcp has its own bugs/quirks/idiosyncrasies, or perhaps is exposing those of the underlying FreeTDS code. In any case, freebcp is not a great solution, but for bulk data transfer from Linux to SQL Server it seems to be the only game in town.
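For reference, a hypothetical freebcp invocation for loading a delimited file into SQL Server; the server, credentials, table, and file names are all placeholders:

```shell
# Bulk-load rows from data.csv into mydb.dbo.mytable in character mode (-c),
# using comma field terminators (-t). Server name and credentials are placeholders.
freebcp mydb.dbo.mytable in data.csv -S myserver -U myuser -P mypass -c -t ','
```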
I’ve noted how difficult it is to get a complete list of parameters that can be used in Rails’ config/database.yml file. I understand this is because different parameters apply at different library layers. Still, I would like a more complete list somewhere, especially given that the parameter names can be confusingly similar yet different between adapters (e.g. timeout, connection_timeout, login_timeout, connect_timeout). For PG the disparate parameters can be found here:
- ActiveRecord::ConnectionAdapters::ConnectionPool – as far as I know this is consistent across all ActiveRecord adapters. (connection_timeout)
- libpq Parameter Key Words – (connect_timeout)
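To illustrate how the layers combine, a hypothetical database.yml entry; all values are made up, with each timeout annotated with the layer that consumes it per the documentation above:

```yaml
production:
  adapter: postgresql
  database: myapp_production    # placeholder database name
  pool: 5
  connection_timeout: 5         # ActiveRecord ConnectionPool (seconds)
  connect_timeout: 10           # libpq parameter (seconds)
```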
I may create a similar list for SQL Server.
This used to be a bigger issue, but now, thanks to the signed drivers, there’s very little more to do than download the drivers in Windows and install them:
I did my time with rsruby, rinruby, and Rserve-Ruby-client. In retrospect I should have trusted the Rserve-Ruby-client README, which details the problems with rsruby and rinruby.
- rsruby – the dealbreaker is that it is not stable; there are a few other downsides, including complex data conversions and compilation issues, but enough said.
- rinruby – slow, but more importantly it fails when assigning large data, making it pretty much useless. See Bug #2 and Bug #13.
Rserve can be installed on Debian/Ubuntu with:
apt-get install r-cran-rserve
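Once the Rserve daemon is running, a minimal sketch using the rserve-client gem (assumes `gem install rserve-client` and an Rserve instance listening on localhost):

```ruby
require 'rserve'  # from the rserve-client gem

# Connect to a local Rserve daemon, evaluate an R expression,
# and convert the result to a Ruby value.
con = Rserve::Connection.new
puts con.eval('mean(c(1, 2, 3))').to_ruby
con.close
```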
Be aware that there is a bug in the Ubuntu package that requires fixing for Rserve to work:
Cleaning up some systems and looking for packages that are no longer maintained.
apt-show-versions | grep 'No available version in archive'
I was amazed at some of the old packages on the systems, like libraries related to Gnome 2. This catches things that other methods like deborphan won’t.
I had to do this recently for a project. The difficult piece was sorting through the opinions on the best tools and procedures for handling this migration. So I tried a few until I found one that worked so well, and so quickly, that I decided it was worth sharing. First, things I used that didn’t work well and/or quickly and/or intuitively:
- mysqldump with various options and various tweaks/scripts/editing of the dump file
What did work well was py-mysql2pgsql. Once installed, it was easy to set the configuration file options and run it. It worked without problems and I would use it again for this task. I can’t comment on whether it will handle all cases – this project didn’t contain anything too fancy – but I would recommend it as a place to start:
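For reference, the shape of the py-mysql2pgsql config file (mysql2pgsql.yml); running py-mysql2pgsql with no config present generates a template along these lines, and every value below is a placeholder:

```yaml
mysql:
  hostname: localhost
  port: 3306
  username: mysql_user     # placeholder
  password: mysql_pass     # placeholder
  database: source_db      # placeholder
destination:
  postgres:
    hostname: localhost
    port: 5432
    username: pg_user      # placeholder
    password: pg_pass      # placeholder
    database: target_db    # placeholder
```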