List of all CSS color properties

I couldn’t find a list, so here it is:

color
background-color
border-color
border-top-color
border-right-color
border-bottom-color
border-left-color
outline-color
text-decoration-color
column-rule-color
caret-color

Note that this list changes over time as new features are added, varies by browser, and potentially by the types of DOM elements you use, so you may want to generate the list yourself.

Getting a list of all CSS color properties with jQuery

The list of available CSS properties is always changing, so the best way to get the current values is to query the browser.

In this technique, we grab all DOM elements on the page, read each element’s computed style, and filter to the property names containing the word “color”, which gives us a pretty good list:

var elements = $('*');

function getStyles(i, v) { 
  var s = window.getComputedStyle(v); 
  var keys = []; 
  for (var i = 0; i < s.length; i++) { 
    keys[keys.length] = s.item(i); 
  }
  return keys; 
}

var props =
  $.grep(
    elements.map(getStyles).get(),
    function(v) { 
      // indexOf catches "color" itself as well as e.g. "border-top-color"
      return v.indexOf('color') !== -1;
    });

Using AppSettings in AppHarbor to keep connection strings out of your git repository

When you generate a new Entity Framework project in C#, it creates an application context object, which is very insistent on reading its connection from the connection strings section of the web.config:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
  public ApplicationDbContext()
    : base("DefaultConnection", throwIfV1Schema: false)
  {
  }

  public static ApplicationDbContext Create()
  {
    return new ApplicationDbContext();
  }
}

Ideally you don’t want to check in the connection string. If you’re using AppHarbor, their UI helpfully lets you add AppSettings, but it takes some juggling to get these into the app context.

You still need to have a connection string, but you can leave it blank:

  <add name="DefaultConnection" 
      connectionString=""
      providerName="System.Data.SqlClient" />

For testing purposes, you should also add your own AppSetting to the web.config (the key name here is arbitrary; it just has to match what Application_Start reads):

      <add key="ConnectionString"
           value="<my connection string>"/>

Then, modify the Application_Start to read the appSetting and hack it into the connection string:

protected void Application_Start()
{
  var connections = ConfigurationManager.ConnectionStrings;
  ConnectionStringSettings defaultConnection = null;
  foreach (ConnectionStringSettings c in connections) 
  {
    if ("DefaultConnection".Equals(c.Name))
    {
      defaultConnection = c;
    }
  }

  // Connection strings are read-only at runtime, so flip the
  // private read-only flag with reflection before overwriting.
  var fi = typeof(ConfigurationElement)
    .GetField("_bReadOnly",
          BindingFlags.Instance | BindingFlags.NonPublic);
  fi.SetValue(defaultConnection, false);

  defaultConnection.ConnectionString = 
    ConfigurationManager.AppSettings["ConnectionString"];
}

Credit for this technique goes to David Gardiner (VB example available there too if you need it for some reason).

Getting a list of all colors on a webpage

To get all colors on a page with jQuery, you can use the “*” selector to get all DOM elements, call “getComputedStyle” on each element, and then de-duplicate the results:

var elements = $('*');

function getColor(i, v) { 
  return window.getComputedStyle(v).color; 
}

var colors = $.unique(elements.map(getColor).get());
Unfortunately this gives you just the CSS colors, and not colors from images, but it’s still pretty useful:
["rgb(66, 137, 186)", 
 "rgb(131, 131, 131)", 
 "rgb(119, 119, 119)", 
 "rgb(0, 0, 0)", 
 "rgb(66, 137, 186)", 
 "rgb(119, 119, 119)", 
 "rgb(101, 101, 101)", 
 "rgb(255, 255, 255)", 
 "rgb(119, 119, 119)"]

As a potentially better approach, you can group the DOM elements by color with underscore.js (which would let you change their colors later):

function col(v) { 
  return window.getComputedStyle(v).color;
}

_.groupBy(elements, col);

Scala fold example (fold over tuples)

The variations on “fold” let you condense a Scala collection into a single value.

This example shows how you might find a named column in a CSV row, by calling “zip” on the lists of headers and cells, then doing a lookup with foldLeft:

val searchColumn = "Name"
val headers = "Name,Quantity,Amount"
val row = "Gary,10,100"
val result =
  (headers.split(",") zip row.split(","))
    .foldLeft("<default value>")(
      (prior, dataTuple) => {
        val (headerName, cell) = dataTuple
        if (headerName.equals(searchColumn)) cell else prior
      })

// result: "Gary"

Migrating a SQL Server Database from AppHarbor to Amazon RDS

SQL Server Management Studio provides a tool to migrate data between two databases. In my testing I found that it does not copy constraints, but it is fairly fast at transferring data, as it copies tables in parallel. To work around the missing constraints, I’ve found that using Liquibase to build the new database first, then SQL Server Management Studio to transfer the data, works fairly well.

Preparation steps:
1. If using an AWS reseller, determine what datacenter they put the database in
2. Determine which version of SQL Server the original database is running:

SELECT @@VERSION

Microsoft SQL Server 2008 R2 (RTM) – 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Express Edition with Advanced Services on Windows NT 6.0 (Build 6002: Service Pack 2) (Hypervisor)

3. Determine how much disk space your database uses. For this you can attempt a query, but you may have an easier time consulting the administration website for your existing database.
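If you can run queries against the source, a sketch of the simplest option: sp_spaceused with no arguments reports the size of the current database.

```sql
-- Reports database_size and unallocated space for the current database
EXEC sp_spaceused
```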

4. Create a new database in RDS within the same datacenter, using the version number found in step 2.

5. Take a database backup of the original. Unfortunately SQL Server backups are painful in shared environments, so your best bet may be a tool like SQL Backup and FTP, although the resulting backups are difficult to restore from.

6. Create a readonly account in the source database to work with (this prevents accidental modifications)
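A sketch of what that might look like in T-SQL; the login name, password, and database name are all placeholders:

```sql
-- All names here are placeholders
CREATE LOGIN migration_reader WITH PASSWORD = 'choose-a-strong-password';
USE SourceDb;
CREATE USER migration_reader FOR LOGIN migration_reader;
-- db_datareader grants SELECT on all tables, and nothing else
EXEC sp_addrolemember 'db_datareader', 'migration_reader';
```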

7. Check which tables are the largest – if you have tables containing audit history or logs, you may be surprised at their size. If you want to make the migration faster, you’ll need to delete old entries.

If you use autoincrement fields, you will also need this information, because you will need to set the starting values in the new database on creation.

SELECT sysobjects.Name
    , sysindexes.Rows
FROM sysobjects
    INNER JOIN sysindexes
    ON sysobjects.Id = sysindexes.Id
WHERE sysobjects.Type = 'U'
    AND sysindexes.IndId < 2
ORDER BY sysindexes.Rows DESC

8. Download Liquibase and the SQL Server jdbc driver.

9. Create a configuration file to point to both databases:
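For example, a minimal liquibase.properties; every host, database name, and credential below is a placeholder:

```properties
# liquibase.properties -- all hosts, names, and credentials are placeholders
driver: com.microsoft.sqlserver.jdbc.SQLServerDriver
classpath: sqljdbc4.jar
# "url" is the new (target) RDS database
url: jdbc:sqlserver://new-db.example.rds.amazonaws.com:1433;databaseName=mydb
username: migration
password: secret
# "referenceUrl" is the original (source) database
referenceUrl: jdbc:sqlserver://old-db.example.net:1433;databaseName=mydb
referenceUsername: migration_reader
referencePassword: secret
changeLogFile: changelog.xml
```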



10. Have liquibase diff the “old” and “new” databases. This compares the two schemas (one being empty) and tells you how much work it’s going to do when you actually run it.

liquibase diff
Unexpected Index(s): NONE
Changed Index(s): NONE
Missing Primary Key(s):
     PK_ABExperiments_1 on ABExperiments(UniqueId, Experiment)
     PK_AccountCredits on AccountCredits(AccountId)
Unexpected Primary Key(s): NONE
Changed Primary Key(s): NONE
Missing Schema(s): NONE
Unexpected Schema(s): NONE
Changed Schema(s): NONE
Missing Sequence(s): NONE
Unexpected Sequence(s): NONE
Changed Sequence(s): NONE
Missing Stored Procedure(s): NONE
Unexpected Stored Procedure(s): NONE
Changed Stored Procedure(s): NONE
Missing Table(s): NONE
Unexpected Table(s): NONE
Changed Table(s): NONE
Missing Unique Constraint(s): NONE
Unexpected Unique Constraint(s): NONE
Changed Unique Constraint(s): NONE

11. Have Liquibase generate a change log:

liquibase diffChangeLog

This produces an XML file with all the changes required to produce the new database. You will likely need to edit this file.

12. You can attempt to run the change log, to determine what problems you’ll hit, by running:

liquibase update

13. If you use autoincrement fields, you will need to set the starting values. I picked a single value larger than any existing identity value of mine, because that makes it easy to find-and-replace in this file:

<column autoIncrement="true" startWith="150000" name="Id" type="int">

14. You will likely need to re-create the database several times to get this right. There are a few types it struggles with (e.g. blob types) so you may want to watch for these and change them in the source system.

15. Before running a data migration, you need to disable foreign key constraints in the target database:

EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"

16. You can then suffer through the SQL Server UI.

Select “SQL Server Authentication”:

17. For each table, you have to edit the migration settings and check “Enable identity insert”. This setting exists for migrations – it allows you to insert explicit values into “identity” columns, and it can only be enabled on one table at a time.
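That checkbox corresponds to SQL Server’s IDENTITY_INSERT option; if you end up scripting part of the copy yourself, the equivalent is below (the table name is borrowed from the diff output above, purely as an example):

```sql
SET IDENTITY_INSERT dbo.AccountCredits ON;
-- insert rows with explicit values for the identity column here
SET IDENTITY_INSERT dbo.AccountCredits OFF;
```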


18. Run the data migration and wait.


19. Remove the tables that Liquibase creates, unless you need them for future migrations:

DROP TABLE DATABASECHANGELOG
DROP TABLE DATABASECHANGELOGLOCK


20. Re-enable all the constraints:

EXEC sp_msforeachtable "ALTER TABLE ? CHECK CONSTRAINT all"

21. Verify that the application works.

Fixing Oracle installer error PRVF-7501 : Sufficient space is not available at location “/tmp”

If your “/tmp” location is limited in space, you can experience the following error installing Oracle 12c:

INFO: INFO: ERROR: [ResultSet.addErrorDescription:1078] 
PRVF-7501 : Sufficient space is not available at location "/tmp" on node "ora" 
[Required space = 1GB ]

To fix this (and avoid various similar errors), create a directory on a drive with free space (“/root/tmp” in my case). Then make a shell script that you use to call the Oracle installer, and redirect all the temp variables:

export TEMP=/root/tmp
export TMP=/root/tmp
export TMPDIR=/root/tmp

“TEMP” affects the error listed, which occurs a long way into the installer. TMPDIR affects the space check at the beginning.
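Putting the whole wrapper together — a sketch, assuming /root/tmp is your chosen location and that the installer binary lives in the current directory:

```shell
#!/bin/sh
# Relocate every temp variable the Oracle installer consults.
# /root/tmp is an example; use any directory on a drive with ~1GB free.
mkdir -p /root/tmp 2>/dev/null || true
export TEMP=/root/tmp
export TMP=/root/tmp
export TMPDIR=/root/tmp
echo "temp directories relocated to $TMPDIR"
# ./runInstaller   # launch the installer with the adjusted environment
```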

Installing Solr 5.x on CentOS 7

To install Solr 5.x on CentOS 7, you need a JDK, preferably 64-bit:

yum search openjdk
yum -y install java-1.8.0-openjdk.x86_64

Then, download from a mirror and extract Solr:

wget http://archive.apache.org/dist/lucene/solr/5.2.1/solr-5.2.1.tgz
tar xvf solr-5.2.1.tgz

If everything worked, it should be simple to start:

cd solr-5.2.1
bin/solr start -p 8000

To test that Solr is working, you can use wget. This won’t work until Solr finishes starting up, which takes 10-30 seconds:

wget localhost:8000


--2015-08-02 18:57:19--  http://localhost:8000/
Resolving localhost (localhost)... ::1,
Connecting to localhost (localhost)|::1|:8000... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://localhost:8000/solr/ [following]
--2015-08-02 18:57:19--  http://localhost:8000/solr/
Reusing existing connection to [localhost]:8000.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
[ <=>                    ] 6,079     -.-K/s   in 0s
2015-08-02 18:57:19 (125 MB/s) - ‘index.html’ saved [6079]

If this fails after waiting, run Solr again with “-f” (foreground) to see the error:

bin/solr start -p 8000 -f

If you run out of memory, add the “-m” argument (e.g. -m 256m) – by default Solr uses 512mb.

bin/solr start -p 8000 -f -m 256m

To access Solr’s administration pages, run SSH and enable port forwarding. With Putty this means adding “-L 8000:localhost:8000” (for our example) to the command line arguments. Once you connect this way, you can use “http://localhost:8000/solr/#/” to connect to the Solr admin panel.

Fixing “Cannot allocate memory” with Solr 5.2.1

If you run Solr on a VM, it may appear to start but actually have failed.

bin/solr start -p 8000 -f

If you run it with “-f” (which prevents backgrounding the process), you will then see an out of memory error:

Starting Solr on port 8983 from /root/solr-5.2.1/server

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000e8000000, 402653184, 0) failed; error='Cannot allocate memory' (errno=12)
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 402653184 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /root/solr-5.2.1/server/hs_err_pid16824.log

This is easily fixed by adding a memory argument (-m). The default is 512 MB, so if you’re hitting this error you’re likely using a cheap VM and need to lower this value, or rebuild the VM with more RAM:

bin/solr start -p 8000 -f -m 256m

This is what you should see:

Starting Solr on port 8000 from /root/solr-5.2.1/server

0    [main] INFO  org.eclipse.jetty.util.log  [   ] – Logging initialized @1403ms
816  [main] INFO  org.eclipse.jetty.server.Server  [   ] – jetty-9.2.10.v20150310
914  [main] WARN  org.eclipse.jetty.server.handler.RequestLogHandler  [   ] – !RequestLog
948  [main] INFO  org.eclipse.jetty.deploy.providers.ScanningAppProvider  [   ] – Deployment monitor [file:/root/solr-5.2.1/server/contexts/] at interval 0
3991 [main] INFO  org.eclipse.jetty.webapp.StandardDescriptorProcessor  [   ] – NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
4030 [main] WARN  [   ] – ServletContext@o.e.j.w.WebAppContext@457e2f02{/solr,file:/root/solr-5.2.1/server/solr-webapp/webapp/,STARTING}{/solr.war} has uncovered http methods for path: /
4136 [main] INFO  org.apache.solr.servlet.SolrDispatchFilter  [   ] – SolrDispatchFilter.init()WebAppClassLoader=1582797472@5e5792a0
4164 [main] INFO  org.apache.solr.core.SolrResourceLoader  [   ] – JNDI not configured for solr (NoInitialContextEx)
4165 [main] INFO  org.apache.solr.core.SolrResourceLoader  [   ] – using system property solr.solr.home: /root/solr-5.2.1/server/solr
4167 [main] INFO  org.apache.solr.core.SolrResourceLoader  [   ] – new SolrResourceLoader for directory: '/root/solr-5.2.1/server/solr/'
4392 [main] INFO  org.apache.solr.core.SolrXmlConfig  [   ] – Loading container configuration from /root/solr-5.2.1/server/solr/solr.xml
4534 [main] INFO  org.apache.solr.core.CoresLocator  [   ] – Config-defined core root directory: /root/solr-5.2.1/server/solr
4566 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – New CoreContainer 2073621255
4567 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – Loading cores into CoreContainer [instanceDir=/root/solr-5.2.1/server/solr/]
4567 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – loading shared library: /root/solr-5.2.1/server/solr/lib
4568 [main] WARN  org.apache.solr.core.SolrResourceLoader  [   ] – Can't find (or read) directory to add to classloader: lib (resolved as: /root/solr-5.2.1/server/solr/lib).
4603 [main] INFO  org.apache.solr.handler.component.HttpShardHandlerFactory  [   ] – created with socketTimeout : 600000,connTimeout : 60000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
5305 [main] INFO  org.apache.solr.update.UpdateShardHandler  [   ] – Creating UpdateShardHandler HTTP client with params: socketTimeout=600000&connTimeout=60000&retry=true
5311 [main] INFO  org.apache.solr.logging.LogWatcher  [   ] – SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
5327 [main] INFO  org.apache.solr.logging.LogWatcher  [   ] – Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
5330 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – Node Name:
5331 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – No authentication plugin used.
5547 [main] INFO  org.apache.solr.core.CoresLocator  [   ] – Looking for core definitions underneath /root/solr-5.2.1/server/solr
5599 [main] INFO  org.apache.solr.core.CoresLocator  [   ] – Found 0 core definitions
5609 [main] INFO  org.apache.solr.servlet.SolrDispatchFilter  [   ] – user.dir=/root/solr-5.2.1/server
5610 [main] INFO  org.apache.solr.servlet.SolrDispatchFilter  [   ] – SolrDispatchFilter.init() done
5651 [main] INFO  org.eclipse.jetty.server.handler.ContextHandler  [   ] – Started o.e.j.w.WebAppContext@457e2f02{/solr,file:/root/solr-5.2.1/server/solr-webapp/webapp/,AVAILABLE}{/solr.war}
5693 [main] INFO  org.eclipse.jetty.server.ServerConnector  [   ] – Started ServerConne

Validating Application Performance in a cloud environment, using C#, Hangfire, and RethinkDB

The rise of “platform” sites (e.g. Heroku) enables developers to build and deploy web applications cheaply, without having to understand operational problems. Typically these products let you purchase a combination of three things: web servers, databases, and background job execution. This works well, as long as you stay within their platform, and as long as their pricing maps to what you are doing.

There are several “platform” vendors that are essentially AWS resellers, so you should be able to mix-and-match application services. For instance, if you wanted to upgrade a database, you could purchase one from such a reseller, or from Amazon directly via RDS.

To test databases sold by different vendors, I built modified implementations of the C# database drivers which run the queries against multiple databases in parallel. These buffer the results, and log them to RethinkDB in a background thread using Hangfire. When you want a database connection, you do this:

connection = 
  new ProxyDbConnection(
    new List<IDbConnection>() {
      new SqlConnection(ConfigurationManager.ConnectionStrings["db1"].ToString()),
      new SqlConnection(ConfigurationManager.ConnectionStrings["db2"].ToString())
    });

Underneath, these log what you do to them, which lets us compare performance:

The log messages record everything we know about a transaction – the query, a hash of its parameters, the database, and an ID that links all of the concurrent operations into one group:

public class TimingRecord
{
  public String RunId { get; set; }
  public String DbId { get; set; }
  public DateTime RunDate { get; set; }
  public double? Duration { get; set; }
  public String Query { get; set; }
  public String QueryParms { get; set; }
  public String IP { get; set; }
  public String ErrorMessage { get; set; }
}

When a query completes or errors out, these messages are saved to a thread-safe collection. When the collection reaches a certain size, it is serialized and saved to RethinkDB in the background. If RethinkDB shuts down or is not configured correctly, this behavior is no worse than it was before the logging was introduced.

For background threads, I’m using Hangfire, which is nice in that you can make any public method with serializable arguments into a background process.

if (_buffer.Count >= 50)
{
  ConcurrentQueue<TimingRecord> dataToSend;
  lock (_buffer)
  {
    // swap in a fresh buffer, then send the full one in the background
    dataToSend = _buffer;
    _buffer = new ConcurrentQueue<TimingRecord>();
  }
  BackgroundJob.Enqueue(() => SaveTimingLog(dataToSend.ToArray()));
}

Because RethinkDB lets you upload any object, saving the log messages is trivial:

var connectionFactory = ConfigurationAssembler.CreateConnectionFactory("logging");
using (var connection = connectionFactory.Get())
{
  var table = Query.Db("performance").Table<TimingRecord>("query_timings");
  connection.Run(table.Insert(records)); // "records" = the buffered TimingRecords
}

When you run a query using the proxy database connection, it runs against each of the provided connections:

var queryResults =
  _connections
    .AsParallel()
    .Select(
      (_command) => {
        // run the command against one underlying connection,
        // capturing either the results or the exception thrown
        // ...
      });

Each execution returns an “Either” monad with the results or an exception. This type is fairly complex, so it is aliased as a type called “QueryResults”.

using QueryResults = 

Once the parallel loop finishes, we get all the results, treating the first as the “primary” database that the end user will see.

Once a bit of this data is logged, we can run some queries against the logs in RethinkDB to see which database behaves better. RethinkDB uses JavaScript for its query engine. Unfortunately there is no concise way to demonstrate the queries for this project, since anything complex ends up as a series of map-reduce operations.

   function(left, right) {
     var original = left('DbId').eq('db1') ? right : left;
     var newDb = left('DbId').eq('db2') ? left : right;
     var d1 = original('Duration');
     var d2 = newDb('Duration');
     var delta = d2.sub(d1);
     return {
       'Duration1': d1,
       'Duration2': d2,
       'Delta': delta,
       'Duration': left('Duration')
     };
   })
   .map(function(group) {
      return {
        'id': group('group'),
        'Delta': group('reduction')('Delta')
      };
   }).map(function(row) {
     return {
       'id': row('id'),
       'Delta': row('Delta'),
       'Count': 1
     };
   })
   .reduce(function(left, right) {
     var Delta = left('Delta').add(right('Delta'));
     var Count = left('Count').add(right('Count'));    
     return {
       Delta: Delta,
       Count: Count,
       Avg: Delta.div(Count)
     };
   })

Example output:

{
  "group": 1,
  "reduction": {
    "Avg": 26548.960784313724,
    "Count": 51,
    "Delta": 1353997
  }
}

If this were a virtualized or bare metal environment, we’d be looking to answer several questions:

  • Are these servers in the same data center?
  • If so, will the network traffic stay within the datacenter?
  • Is the underlying hardware being shared with other applications?
  • How often will the database stall due to memory or lock contention?
  • How do I know if I’ve overprovisioned?
  • How do I know if I’ve migrated the data correctly?

I’m most concerned with database performance, as this is the hardest to get right.

In this environment, testing performance and error rates is typically sufficient, since the only thing within your control is your choice of vendors and database migration, but this technique lets you prove out infrastructure changes.

While virtualized infrastructure doesn’t let you control the entire environment, you typically still have visibility into how it works. If you build everything with virtual machines, you rely on your hosting provider to get network routing in their data center correct, but you can still use tracert to verify that traffic stays where it should.

A friend in finance tells me that when they roll out product updates, they run the new version alongside the old for a month, to prevent the introduction of defects. Rather than invent replacements for tools like tracert, it’s far easier to run the same application on two different architectures and see which works best.

If you’re interested in the full source to the C# solution, it is available on github.