Mihai Stancu

Notes & Rants

Multiple apps in one repo with Symfony2 — 2015-10-03


My requirements:

  • Moving application specific configurations into separate application bundles (not separate app/ folders)
  • Retaining common configurations in the app/config/config_*.yml files
  • Retaining common practices such as calling app/console, just adding a parameter to specify the application


  1. Change your apache2 vhost to add a (conditional?) environment variable
    # ...
    # The RegEx below matches subdomains
    SetEnvIf Host nth\..+? SYMFONY_APP=nth
    # ...
  2. Create app/NthKernel.php which extends AppKernel
  3. Override NthKernel::$name = 'nth'
  4. Override NthKernel::serialize and NthKernel::unserialize to ensure the correct name is kept across serialization/deserialization
  5. Override NthKernel::getCacheDir to ensure the cache dirs are split based on the application name:

    public function getCacheDir()
    {
        return $this->rootDir.'/cache/'.$this->name.'/'.$this->environment;
    }

  6. Override NthKernel::registerContainerConfiguration to load configurations based on the application name and environment. In my case I loaded all config.yml files from any installed bundle:
    public function registerContainerConfiguration(LoaderInterface $loader)
    {
        $env = $this->getEnvironment();

        foreach ($this->bundles as $bundle) {
            $dir = $bundle->getPath().'/Resources/config/';
            if (file_exists($path = $dir.'config_'.$env.'.yml')) {
                $loader->load($path);
            } elseif (file_exists($path = $dir.'config.yml')) {
                $loader->load($path);
            }
        }

        $dir = __DIR__.'/config/';
        if (file_exists($path = $dir.'config_'.$env.'.yml')) {
            $loader->load($path);
        } elseif (file_exists($path = $dir.'config.yml')) {
            $loader->load($path);
        }
    }
  7. Change web/app.php and web/app_dev.php to ensure they instantiate NthKernel (and NthCache) based on the environment variable Apache is providing (SYMFONY_APP):
    $app = ucfirst(getenv('SYMFONY_APP'));
    require_once __DIR__.'/../app/AppKernel.php';
    require_once __DIR__.'/../app/'.$app.'Kernel.php';
    //require_once __DIR__.'/../app/AppCache.php';
    //require_once __DIR__.'/../app/'.$app.'Cache.php';
    $kernel = $app.'Kernel';
    $kernel = new $kernel('dev', true);
    //$cache = $app.'Cache';
    //$kernel = new $cache($kernel);
  8. Change app/console to allow you to specify which application you need to use
    $app = ucfirst($input->getParameterOption(array('--app', '-a'), getenv('SYMFONY_APP')));
    $env = $input->getParameterOption(array('--env', '-e'), getenv('SYMFONY_ENV') ?: 'dev');
    // ...
    /* Move require_once after you initialized the `$app` variable */
    require_once __DIR__.'/AppKernel.php';
    require_once __DIR__.'/'.$app.'Kernel.php';
    $kernel = $app.'Kernel';
    $kernel = new $kernel($env, $debug);
    $application = new Application($kernel);
    $application->getDefinition()->addOption(
        new InputOption(
            '--app',
            '-a',
            InputOption::VALUE_REQUIRED,
            'The Application name.'
        )
    );
  9. Use app/console by specifying the application you need to use
    app/console --app=nth --env=dev debug:router
    app/console --app=nth --env=dev debug:container
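Steps 2 through 5 can be sketched in one self-contained file. The AppKernel stub below is only a stand-in for Symfony2's generated kernel (which implements \Serializable), so the example runs without the framework; in a real project only NthKernel would be new, living in app/NthKernel.php.

```php
<?php
// Sketch of steps 2-5 under the assumption of a minimal kernel API.
// The stub AppKernel mimics the relevant parts of Symfony2's kernel.

class AppKernel
{
    protected $name = 'app';
    protected $environment;
    protected $debug;
    protected $rootDir;

    public function __construct($environment, $debug)
    {
        $this->environment = $environment;
        $this->debug = (bool) $debug;
        $this->rootDir = __DIR__;
    }

    public function getName()
    {
        return $this->name;
    }

    public function getCacheDir()
    {
        return $this->rootDir.'/cache/'.$this->environment;
    }

    // Symfony2's kernel only round-trips the environment and debug flag.
    public function serialize()
    {
        return serialize(array($this->environment, $this->debug));
    }

    public function unserialize($data)
    {
        list($environment, $debug) = unserialize($data);
        $this->__construct($environment, $debug);
    }
}

class NthKernel extends AppKernel
{
    // Step 3: fix the kernel name.
    protected $name = 'nth';

    // Step 4: also round-trip the application name.
    public function serialize()
    {
        return serialize(array($this->name, $this->environment, $this->debug));
    }

    public function unserialize($data)
    {
        list($this->name, $environment, $debug) = unserialize($data);
        $this->__construct($environment, $debug);
    }

    // Step 5: split the cache dirs per application.
    public function getCacheDir()
    {
        return $this->rootDir.'/cache/'.$this->name.'/'.$this->environment;
    }
}

$kernel = new NthKernel('dev', true);
echo $kernel->getName(); // nth
```

web/app.php and app/console then only need to pick the class name (AppKernel vs NthKernel) from SYMFONY_APP or --app, as shown in steps 7 and 8.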

Other resources:

JoliCode wrote this article on the topic.

Their approach to the problem seems more idiomatic — creating a structure of application-specific subfolders (apps/nth), each with its own AppKernel, apps/nth/cache, apps/nth/config etc.

A collection of thoroughly random encoders — 2015-10-01

A collection of thoroughly random encoders

Serialization and Encoders

There’s a nicely designed Serializer component within Symfony which allows you to convert structured object data into a transportable or storeable string (or binary) format.

The nice thing about the symfony/serializer design is that it separates two major concerns of serialization: 1) extracting the data from the objects and 2) encoding it into a string.

The extraction part is called normalization wherein the structured object data is converted into a common format — usually easier to encode / supported by all encoders — for example that format could be an associative array.

The encoding part takes the normalized data and creates a string (or binary) representation of it ready to be transported or stored on disk.
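The split between the two phases can be illustrated with nothing but the PHP standard library: get_object_vars as a crude normalizer, json_encode as the encoder. The Invoice class here is a made-up example, not part of the component.

```php
<?php
// Phase 1 (normalize): object -> associative array.
// Phase 2 (encode): array -> string. Any encoder could consume the
// same normalized array, which is exactly the point of the split.

class Invoice
{
    public $vendor = 'ACME';
    public $total = 12.5;
}

// Normalization: extract the data out of the object.
$normalized = get_object_vars(new Invoice());

// Encoding: turn the common format into a transportable string.
// msgpack/igbinary/yaml would slot in at exactly this point.
$encoded = json_encode($normalized);

echo $encoded; // {"vendor":"ACME","total":12.5}
```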

The extra encoders I bundled together

The bundle is a collection of general-purpose serialization encoders I scavenged while investigating what options exist in this field, what purposes they serve and how efficient they are in use (from multiple perspectives).

Fully working PHP encoders: Bencode, BSON, CBOR, Export, IGBinary, MsgPack, Serialize, Tnetstring, UBJSON and YAML.

Partial PHP implementations: Sereal, Smile and PList.

No PHP encoders found: BINN, BJSON, JSON5, HOCON, HJSON and CSON.

Of which:

  • bencode does not support floats.
  • PList has a full PHP encoder but the API requires encoding each scalar-node individually (instead of receiving one multilevel array).

How to judge an encoder

Reference points:

  1. Raw initial data discounting the data structure overheads
    A PHP array composed of key/value pairs of information (an invoice containing a vendor, a client and a number of products each with their specific details);
  2. Access time walking over and copying all raw data
    Using array_reduce to extract all key/value pairs and evaluating their respective binary lengths.


  1. Read speed
    In most applications decoding the data is a more frequent operation than encoding it. Is it fast enough?
  2. Write speed
    If the data was supposed to be transported/communicated from endpoint to endpoint then writing speed should be the second highest concern. If it’s supposed to be stored (semi)persistently then perhaps memory/disk usage should gain higher priority.
  3. Disk space usage
    Compared to the initial data how much more meta-data do you need?
  4. Compression yield
    Is the compressed string significantly smaller than the uncompressed one?
  5. Compression overhead
    How much time does the compression algorithm add to the process?
  6. Memory usage
    Is the memory allocated when reading/writing data from/to the serialization blob comparable to the raw data?
  7. Easy to read by humans
  8. Easy to write by humans
  9. Community and support
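A minimal sketch of the kind of benchmark loop these criteria imply, using json as the stand-in encoder; the invoice-like $data and the iteration count are illustrative only, and real numbers depend on the installed extensions.

```php
<?php
// Time one encoder's write/read over fixed data, then measure size and
// compression yield. Swapping json_encode/json_decode for another
// encoder pair reuses the same harness.

$data = array(
    'vendor' => 'ACME',
    'client' => 'Initech',
    'products' => array(
        array('sku' => 'X1', 'qty' => 2, 'price' => 12.5),
        array('sku' => 'X2', 'qty' => 1, 'price' => 7.25),
    ),
);

$iterations = 10000;

$t = microtime(true);
for ($i = 0; $i < $iterations; $i++) { $blob = json_encode($data); }
$writeTime = microtime(true) - $t;

$t = microtime(true);
for ($i = 0; $i < $iterations; $i++) { $decoded = json_decode($blob, true); }
$readTime = microtime(true) - $t;

// Disk usage and compression yield, relative to the encoded blob.
$gzipped = gzencode($blob);
printf(
    "write %.4fs  read %.4fs  size %dB  gzipped %.1f%%\n",
    $writeTime,
    $readTime,
    strlen($blob),
    100 * strlen($gzipped) / strlen($blob)
);
```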

Analysis of the benchmark data (tables below):

Time is expressed as percent (ex.: decoder read time divided by raw php access time).
Disk usage is expressed as percent (ex.: encoded data length divided by raw data length).

  1. IGBinary and MsgPack and BSON seem to win across the board (read, write, disk usage).
  2. Serialize, JSON and YAML are pretty good at reading and writing but have higher disk usages.
  3. All of the php extensions are much faster than any of the pure php implementations (msgpack, igbinary, bson, serialize, json, export, yaml).
  4. All of the php extensions are much faster even than using array_reduce recursively on the raw array data (wth?).
  5. GZipping encoded data makes the disk usage almost the same as that of the raw data — sweet.
  6. BZipping compresses marginally worse (~10%) and takes much more time to compress.
  7. The time required for GZipping is nearly equal to the encoding time of the fastest encoders.
  8. The fastest human readable/writable formats (JSON and YAML when using the php extensions) are still 2x/7x slower than their binary counterparts.
  9. BSON and MsgPack seem to have very active communities and are used in important projects such as MongoDB and Redis (respectively).
  10. JSON is by far the most popular and ubiquitous of the encoders and is used for all sorts of purposes: communication, storage, logging, configuration; its human readability/writeability is what permits half of those purposes to work.

Benchmark data:

Encoding the data

Format  Read time (%)  Write time (%)  Disk usage (%)
igbinary 4.4137 5.5693 126.625
bson 4.4225 3.6207 162.075
msgpack 5.0974 3.1087 135.915
serialize 6.196 4.2387 198.18
json 13.3619 7.7793 154.12
export 15.0877 9.8206 311.725
yaml 26.5641 21.2818 171.57
tnetstring 181.053 142.24 160.635
xml 182.4433 243.1294 194.39
bencode 261.363 110.4493 148.705
cbor 296.5037 200.4747 136.04
ubjson 346.5415 241.0281 153.615

Encoding + GZipping the data

Format  Read time (%)  Write time (%)  Disk usage (%)
igbinary 9.5062 16.2105 99.245
bson 9.9311 15.5781 105.86
msgpack 10.3767 14.1905 96.21
serialize 12.0267 16.6106 108.44
json 18.759 18.8141 94.905
export 21.6112 23.7916 108.83
yaml 32.2774 33.3446 98.75
tnetstring 186.9992 153.6501 101.23
xml 187.9654 254.5289 106.205
bencode 266.7813 121.1408 95.455
cbor 301.6611 210.844 92.725
ubjson 351.9681 252.3896 100.985

Encoding + BZipping the data

Format  Read time (%)  Write time (%)  Disk usage (%)
igbinary 18.8168 64.4219 106.71
bson 20.3083 73.1522 111.38
msgpack 20.4046 66.812 107.625
serialize 23.8041 79.7363 114.46
json 28.1694 69.9902 102.09
export 34.218 111.2053 114.56
yaml 42.3537 88.8431 104.155
tnetstring 198.5296 210.8589 109.465
xml 199.7003 315.5523 114.08
bencode 276.3642 169.8964 104.1
cbor 310.9098 257.9502 103.005
ubjson 362.4339 309.8795 108.735

MultiKeyMaps for Tuples — 2015-09-14


Understanding MultiKeyMaps of Tuples

A database table is an on-disk Vector of Records, while its Indexes are key/value Maps from String to Record. You can filter, sort and group it by any Indexed column (or actually by any column within the Record).

Take this concept from RDBMSs (an on-disk environment), move it into a purely in-memory environment, add all the possible optimizations for memory efficiency and speed, and that’s a MultiKeyMap of Records.

Basically it’s just another container type that I think should be in the standard libraries of (m)any languages.

What kinds of problems would it fix?

I’ve seen programmers insert stuff into temporary/in-memory tables in order to perform filtering, sorting and grouping on multiple keys (sometimes for relatively small data-sets).

Those programmers could have written their own filtering and sorting logic but it would have been far more error prone, it would require testing, it would have been more time consuming and quite possibly less memory/speed efficient (in an interpreted language compared to a compiled library).

I would assume that those kinds of data structures are already present in RDBMS libraries — granted they would tend to be very purpose-written including the on-diskyness aspect they would naturally inherit.

Standard container libraries rarely make assumptions about the data they carry in order to provide the most abstract, least presumptive and most flexible tool they can. This is likely why this concept wouldn’t easily take root in a statically typed language but even without the use of dynamic reflection / runtime information it could still be implemented sufficiently abstracted to do its job well.

How do we currently bridge this gap?

Doctrine’s ArrayCollections with Criteria-based filtering are an ORM’s approach to filtering a Collection of Records. The presumption exists that the Collection is composed of some kind of logical Records whose fields are accessible.

It is used in conjunction with mapped relationships (1:n, m:1, m:n) between entities which are sometimes big enough to require subfiltering.

    $expr = Criteria::expr();
    $criteria = Criteria::create();
    $criteria->where($expr->gte('start', $start));
    $criteria->andWhere($expr->lte('end', $end));
    $subselection = $collection->matching($criteria);

The above code would either amend the ORM query created to fetch the data from the RDBMS or if the data is already present it would iterate over the collection and find elements matching the specified criteria. The filtering would obviously (unnecessarily) occur every time.

How would I do it?

class IndexedCollection extends ArrayCollection
{
    /* indexing logic: https://github.com/mihai-stancu/collections/blob/master/IndexedCollection.php */
}

class Record
{
    /* Simple structured class containing an email property */
}

$records = new IndexedCollection(
    array(
        'id' => true,        // unique index
        'email' => true,     // unique index
        'lastname' => false, // non-unique index
    )
);

if ($records->containsKeyBy('email', 'contact@example.com')) {
    $record = $records->getBy('email', 'contact@example.com');
    $records->removeBy('email', 'contact@example.com');
}

if (isset($records['lastname:Smith'])) {
    $record = $records['lastname:Smith'];
}

Admittedly the above is missing some syntactic sugar one could only get if the host language would be cooperative in this endeavor, but it gets the point across.
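For illustration, a toy multi-key map fits in a few lines of plain PHP. This is not the linked library, just a sketch of the idea: records live in one list, and every indexed field gets a lookup map maintained beside it, so lookups are hash accesses instead of scans.

```php
<?php
// Toy MultiKeyMap: field names passed to the constructor become
// secondary indexes, kept in sync on every add().

class MultiKeyMap
{
    private $records = array();
    private $indexes = array();

    public function __construct(array $fields)
    {
        foreach ($fields as $field) {
            $this->indexes[$field] = array();
        }
    }

    public function add(array $record)
    {
        $this->records[] = $record;
        foreach (array_keys($this->indexes) as $field) {
            // Non-unique by design: each value maps to a list of records.
            $this->indexes[$field][$record[$field]][] = $record;
        }
    }

    public function getBy($field, $value)
    {
        return isset($this->indexes[$field][$value])
            ? $this->indexes[$field][$value]
            : array();
    }
}

$map = new MultiKeyMap(array('email', 'lastname'));
$map->add(array('email' => 'a@example.com', 'lastname' => 'Smith'));
$map->add(array('email' => 'b@example.com', 'lastname' => 'Smith'));

echo count($map->getBy('lastname', 'Smith')); // 2
```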

Databases don’t do trees — 2015-09-11


Convoluted solutions

Storing tree-like data structures (such as webpages, categories, blog comments, forum topics, nested menus, organization charts) in relational databases is a real challenge and even when you’re using ready-made optimized libraries it’s still convoluted.

  • adjacency lists are recursive and don’t scale well;
  • materialized paths use unanchored text comparison (LIKE '%this%') — better scaling but no cigar;
  • nested sets (aka modified preorder tree traversal) are heavy maintenance: easy for data reads (one select query can fetch a lot of what you may need), but some update scenarios can touch half or more of the records in the table;
  • closure trees duplicate lots of record-to-record relationships, and updates can remove/reinsert lots of them;

All of the above patterns usually also use an adjacency list in parallel to allow for consistency checks: if the application-level implementation is bad, or transactions aren’t used (or available), the trees can end up in incoherent states which need to be corrected (with more fallible code).

Would you try saying all of that in one single breath? It all seems too damn convoluted to work with.
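The recursion complaint about adjacency lists can be shown with a toy in-memory example; in a database, each childrenOf() call below would be another round-trip query, one per node of the tree.

```php
<?php
// Toy adjacency list: each row only knows its parent, so materializing
// a subtree requires one lookup pass per node and level.

$categories = array(
    array('id' => 1, 'parent' => null),
    array('id' => 2, 'parent' => 1),
    array('id' => 3, 'parent' => 2),
    array('id' => 4, 'parent' => 1),
);

function childrenOf(array $rows, $parent)
{
    $tree = array();
    foreach ($rows as $row) {
        if ($row['parent'] === $parent) {
            // In SQL terms: SELECT ... WHERE parent = ?, then recurse.
            $tree[$row['id']] = childrenOf($rows, $row['id']);
        }
    }

    return $tree;
}

print_r(childrenOf($categories, null)); // nested: 1 -> (2 -> 3, 4)
```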

Hats off to some

Oracle has some (syntactic) support for selecting hierarchical data from adjacency lists. It’s still searching for data recursively — all other patterns try to avoid this — but at least it’s being done without round-trip TCP communication from the application to the database and with possible query planner optimizations included.

But unfortunately there is no indexing support included with this Oracle feature, and indexing is where the support is most needed on this topic.

PostgreSQL offers syntactic support for querying materialized path columns, as well as a custom data type for them (ltree) with support for indexing said data type (using GiST).

Can you spell dysfunctional?

We need the ease of use of adjacency lists — only specify a parent — with the syntactic support Oracle offers for them, and the data type and indexing support PostgreSQL is offering.

To say this in one breath: CREATE INDEX category_parent ON category(parent) USING HIERARCHICAL, why doesn’t this just work after 40 years of SQL evolution?

Liquid networks — 2015-08-26


Chemical interaction

Gasses are a chaotic and dynamic state of matter. In gases elements can meet and mix and react, fast. Heat, pressure and newer more attractive reactions can separate the newly formed compound in a very short span.

Solids are orderly and still. In solids elements cannot move and reactions can take place only on the fringes of the solid. Unmoved the newly formed compound won’t enter any new reactions.

Liquids on the other hand can allow mixing and precipitation and thus reactions which take longer to catalyze will get a chance to occur. The newly formed compound can drift and enter new reactions.

Human interaction

A human network with the properties of a gas is like a boiler room or the open floor of a trade market with no walls: people talk to many others, under heat and pressure, trying to find the most attractive deal. Things get done and things get undone quickly — there’s little time for creativity.

A human network with the properties of a solid is like a rigid system of departments with walls between them. People mostly talk among themselves and only communicate with certain other departments according to the normal workflow. Things get done a certain way and only that way — no room for creativity.

A human network with the properties of a liquid is an open space with just the most basic rules to keep things from becoming disruptive, and just the right amount of freedom to allow ideas to flow and mix and combine in several stages before a good one can finally surface.

Everything is a file —


UNIX invented it, BSD and Linux gave it to the world

Everything is a file is a very successful paradigm in the UNIX/Linux communities, one which has allowed the kernel to simplify and uniformize how it uses devices, exposing them to the user as files. All files are treated as a bag of bytes, and reading from or writing to a file is straightforward.

Besides actual data storage, a lot of fruitful exaptation has been derived from this paradigm and from the UNIX/Linux file system conventions:

  • Files, folders, symlinks, hardlinks, named pipes (fifo), network pipes, devices
  • Applications which handle readable files and work well together (ex.: lines separated with \n, columns separated with \t): less/more, tail, head, sort, split, join, fold, par, grep, awk, column, wc, sed, tee
  • Configuration management
  • Application storage
  • Library registry
  • Disk cloning
    • Disk images for backup (dd)
    • Smaller than disk size images (skip unused space)
    • Compress disk images on the fly without storing the uncompressed version (dd | gz)
    • Restoring disk images from backups
    • Disk recovery — HDDs, CDs, DVDs, USB sticks etc. — when they have bad sectors or scratches
    • Creating bootable USB sticks from a raw image file or an ISO (dd again)
  • Virtual filesystems
    • Mounting a raw image file or ISO as a filesystem
    • Mounting archives and compressed archives as a filesystem (tar, gz, bz, zip, rar)
    • Network filesystems look just like normal folders (SAMBA, NFS)
    • Using various network protocols as filesystems: HTTP, FTP, SSH
  • Searching everywhere (find, grep, sed)
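PHP itself carries the paradigm into userland via stream wrappers: one file API (fopen, fread, file_get_contents) serving many "filesystems". A small sketch, where the paths under the system temp dir are purely illustrative:

```php
<?php
// A plain file and a gzip-compressed file, written and read through the
// exact same functions; only the wrapper prefix differs.

$dir = sys_get_temp_dir();

file_put_contents($dir.'/demo.txt', "hello\n");
file_put_contents('compress.zlib://'.$dir.'/demo.txt.gz', "hello\n");

echo file_get_contents($dir.'/demo.txt');                       // hello
echo file_get_contents('compress.zlib://'.$dir.'/demo.txt.gz'); // hello

// Network protocols plug into the same API, e.g.:
//   file_get_contents('http://example.com/');
//   $fh = fopen('php://stdin', 'r');
```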

Plan9 from Bell Labs made it better

Current UNIX/Linux distros don’t implement this paradigm fully — ex.: network devices aren’t files — but some less known systems do (such as the UNIX successor plan9, its descendant inferno, and their Linux correspondent glendix).

The plan9 project went onward in applying the paradigm for:

  • Processes
    • Process management
    • Inter process communication
    • Client-Server network communication
  • Network related issues:
    • Network interfaces are files
    • Access rights to network interfaces is based on filesystem access rights to symlinks pointing to interface files
    • The filesystem (9P) extends over the network as a network communication protocol
  • Graphics interfaces and mouse IO

Other innovations it brought us (which got implemented in UNIX/Linux):

  • UTF-8 / Unicode
  • Filesystem snapshotting
  • Union filesystems
  • Lightweight threads

Paradigm — 2015-08-23


  • A distinct set of concepts or thought patterns [1]
  • A world view underlying the theories and methodology of a particular scientific subject. [2]
  • A framework containing the basic assumptions, ways of thinking, and methodology that are commonly accepted by members of a scientific community [3]

I’d add that the concepts construct a particular context around themselves and the result is a guideline for derivative thinking based on the starting points.

But perhaps my understanding of this concept is skewed due to its use in the programming world especially to describe the need for a paradigm shift in order to correctly understand and use tools which are based on (initially) foreign concepts.