Monday, October 6, 2014

InfluxDB on Ubuntu: Too many open files

I had InfluxDB (a great time-series datastore) running on Ubuntu and kept running into errors like this:

IO error: /opt/influxdb/shared/data/db/shard_db_v2/00031/356947.ldb: Too many open files

The issue was that the Ubuntu default soft limit for open files is only 1024:

root@server:~# ulimit -Sn
1024
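
The hard limit, the ceiling to which an unprivileged process can raise its own soft limit, is similarly low by default (it matches the hard limit of 4096 in the /proc output below):

root@server:~# ulimit -Hn
4096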

There were two steps required to fix this issue with InfluxDB.

  • Set the open file limit system-wide
    Edit /etc/security/limits.conf and add the following:

    * soft nofile 100000
    * hard nofile 100000
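
Note that limits.conf is only read at login, so the new values won't appear until you log out and back in. In a fresh session, you should see something like:

root@server:~# ulimit -Sn
100000
root@server:~# ulimit -Hn
100000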
This raised the limits for login sessions, but limits.conf is applied by PAM at login, and daemons launched by init don't go through a PAM login, so the InfluxDB process was still getting the old limits when it started:

iclark@gar-network-datastore:~$ cat /proc/992/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             47573                47573                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       47573                47573                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

  • To fix this, I had to set the ulimit value in the InfluxDB init script
    Edit /etc/init.d/influxdb and add the following near the beginning of the script:

    ulimit -n 100000
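
For reference, the top of the edited init script ends up looking something like this (just a sketch; the rest of the stock script is unchanged, and its exact contents vary by InfluxDB version):

#!/bin/bash
# /etc/init.d/influxdb (excerpt)
# Raise the open file limit before the daemon is launched; limits.conf
# doesn't apply here because init scripts don't go through a PAM login.
ulimit -n 100000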
After adding this value, I just stopped and started the InfluxDB process, and voilà:

root@gar-network-datastore:~# cat /proc/1279/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             47573                47573                processes
Max open files            100000               100000               files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       47573                47573                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
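
As a convenience, you can spot-check a running daemon's limit by name rather than hunting for the PID (this assumes a single process named influxdb; pidof and /proc are standard on Ubuntu):

root@gar-network-datastore:~# grep "open files" /proc/$(pidof influxdb)/limits
Max open files            100000               100000               files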

Friday, September 26, 2014

Twitter typeahead.js with Bootstrap 3, AJAX, and CoffeeScript

I ran into a few problems today while deploying Twitter's typeahead.js.  I had dynamic typeahead data that I needed to pull from a URL (via an AJAX request), and I also needed to integrate typeahead with Bootstrap 3 (which breaks almost all of typeahead's CSS and mangles Bootstrap's form layout).  I also wanted Bloodhound to match any part of a word (not just the beginning), and typeahead.js to submit the form when an item was selected by clicking.  This project's JavaScript is actually written in CoffeeScript, so the last thing I had to do was convert the working JavaScript to CoffeeScript.

The first thing I sorted out was the CSS.  Adding the following corrected all of my CSS issues (note: not all of this is CSS I wrote; much of it was pulled from various other Bootstrap/typeahead fixes online, tweaked because none of them worked 100% for me):

/* Twitter typeahead compatibility fixes */
.twitter-typeahead {
  float: left;
  margin-right: 3px;
}

.tt-suggestion {
  display: block;
  padding: 3px 20px;
}

.twitter-typeahead .tt-hint {
  color:#a1a1a1;
  padding: 6px 12px;
  border:1px solid transparent;
}

.twitter-typeahead .tt-query {
  border-radius: 4px!important;
  border-top-right-radius: 0!important;
  border-bottom-right-radius: 0!important;
}

.tt-dropdown-menu {
  min-width: 160px;
  margin-top: 2px;
  padding: 5px 0;
  background-color: #fff;
  border: 1px solid #ccc;
  border: 1px solid rgba(0,0,0,.2);
  *border-right-width: 2px;
  *border-bottom-width: 2px;
  -webkit-border-radius: 6px;
  -moz-border-radius: 6px;
  border-radius: 6px;
  -webkit-box-shadow: 0 5px 10px rgba(0,0,0,.2);
  -moz-box-shadow: 0 5px 10px rgba(0,0,0,.2);
  box-shadow: 0 5px 10px rgba(0,0,0,.2);
  -webkit-background-clip: padding-box;
  -moz-background-clip: padding;
  background-clip: padding-box;
}

.tt-cursor {
  cursor: pointer;
  color: #fff;
  background-color: #0081c2;
  background-image: -moz-linear-gradient(top, #0088cc, #0077b3);
  background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#0088cc), to(#0077b3));
  background-image: -webkit-linear-gradient(top, #0088cc, #0077b3);
  background-image: -o-linear-gradient(top, #0088cc, #0077b3);
  background-image: linear-gradient(to bottom, #0088cc, #0077b3);
  background-repeat: repeat-x;
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff0088cc', endColorstr='#ff0077b3', GradientType=0)
}

.tt-suggestion.tt-is-under-cursor a {
  color: #fff;
}

.tt-suggestion p {
  margin: 0;
}

After my form was looking normal again, I had to get typeahead itself working. After a lot of trial and error, here is what I ended up with (my API URL was /v1/devices/list). Again, not all of this is my original code; it's bits and pieces of JavaScript from around the web that I cobbled together and converted to CoffeeScript:

typeahead = ->
  devices = new Bloodhound({
    # Tokenize each datum on whitespace, then also push every suffix of each
    # token so queries can match the middle of a word, not just the beginning
    datumTokenizer: (d) ->
      test = Bloodhound.tokenizers.whitespace(d.value)
      $.each(test, (k, v) ->
        i = 0
        while (i + 1) < v.length
          test.push(v.substr(i, v.length))  # the suffix of v starting at i
          i++
      )
      return test
    ,
    queryTokenizer: Bloodhound.tokenizers.whitespace,
    limit: 10,
    prefetch: {
      url: '/v1/devices/list',
      filter: (list) ->
        $.map(list, (device) -> { value: device })
    }
  })

  # clear any locally cached copy of the prefetch data, then kick off the
  # loading/processing of `prefetch`
  devices.clearPrefetchCache()
  devices.initialize()


  # the first argument is the typeahead options hash; the second
  # describes the dataset backing the suggestions
  $('.typeahead').typeahead(
    { 
      highlight: true,
    },
    { 
      name: 'devices',
      # `ttAdapter` wraps the suggestion engine in an adapter that
      # is compatible with the typeahead jQuery plugin
      source: devices.ttAdapter()
    }
  )
  $('input.typeahead').bind("typeahead:selected", -> $("form").submit() )
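
One assumption worth calling out: the prefetch filter above maps every element of the response to { value: device }, which means /v1/devices/list is expected to return a plain JSON array of strings. A quick way to sanity-check that from the shell (hostname and device names here are hypothetical):

$ curl http://localhost/v1/devices/list
["core-router-1","edge-switch-2","access-point-3"]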

I hope this helps someone else out there!

Thursday, June 26, 2014

Ruby on Rails gem: Uninitialized Constant (class name)

I was working on a Ruby on Rails project earlier, and ran into a really frustrating issue regarding gems.

I added the influxdb gem to my Gemfile, ran bundle install, saw it install successfully, then tried to use it in ApplicationController.  No luck!  Over and over it acted like the gem wasn't installed at all (an "uninitialized constant ApplicationController::InfluxDB" error when referencing the class defined in the gem).

When I ran the same code in the rails console, it worked fine, which led me to believe it was a problem with bundler or the environments.
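
That check is easy to reproduce: if the constant resolves in the console, the gem is installed and loadable, and the problem lies with the running app server's environment. Something like:

$ rails console
> InfluxDB
=> InfluxDB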

It turns out the solution was simply to restart the Apache/nginx/whatever app server: "service apache2 restart" in my case. Whew!

Saturday, March 22, 2014

CrashPlan and JunOS Pulse conflict

I spent an hour or so today trying to get CrashPlan working on my Linux machine.  The software installed just fine, and the backup engine appeared to start fine as well:

$ sudo service crashplan start    
Starting CrashPlan Engine ... Using standard startup
OK

However, the desktop portion of the CrashPlan software kept saying "Unable to connect to the backup engine, retry?".  Weird.  So I checked whether the engine was actually listening on the port it's supposed to use:

$ netstat -ln | grep 4243
$

Nope.  So it must not be starting up as "OK" as it claims.  Digging through the logs (specifically /usr/local/crashplan/log/service.log.0), I found this gem, which shows up just before the service shuts itself down:

[03.22.14 22:05:04.511 WARN    main                 com.backup42.service.CPService          ] >>>>> CPService is already listening on 0.0.0.0:4242 <<<<<

Wait, wat?  Something (it thinks it's itself, but that's not the case) is already bound to port 4242.  Netstat shows who actually is bound to that port:

$ sudo netstat -anp | grep 4242 
tcp  0 0 127.0.0.1:4242  0.0.0.0:*        LISTEN      11466/ncsvc
tcp  0 0 127.0.0.1:45982 127.0.0.1:4242   ESTABLISHED 11466/ncsvc
tcp  0 0 127.0.0.1:4242  127.0.0.1:45982  ESTABLISHED 11466/ncsvc

ncsvc!  That's the Juniper VPN software (aka JunOS Pulse).  Well, that's no good; I'm almost always connected to the VPN.  Luckily, CrashPlan lets you configure the engine to use a port other than 4242!

The fix that made it work happily ever after was simply modifying a line in /usr/local/crashplan/conf/my.service.xml:

Change:
 <location>0.0.0.0:4242</location>
To a port that isn't already in use:
 <location>0.0.0.0:4244</location>
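
If you'd rather script the edit, something like this should do it (the sed pattern assumes the stock file has a single <location> line; -i.bak keeps a backup):

$ sudo sed -i.bak 's|0.0.0.0:4242|0.0.0.0:4244|' /usr/local/crashplan/conf/my.service.xml
$ sudo service crashplan restart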

Now the CrashPlanEngine portion of the software runs smoothly even when I'm on the VPN!