Blog about things

A blogging experiment about hacking

Kicking the Tires of CoreOS

I’m going to try out CoreOS to create a cluster of machines to host an app consisting of Docker containers.

CoreOS is a Linux distribution designed to run clusters of containers efficiently and securely. Our application components run in Docker containers, organized as services. The distribution also includes etcd, a key/value store for distributed configuration management and service discovery, and fleet, which manages services across the cluster. The machines can be updated automatically with security fixes and patches.

First, let’s create a CloudFormation template we will use to create our stack. I started from a minimal CoreOS template (linked here), adding parameters for VPC subnets and related availability zones. I also updated the AMI for us-east-1, where I’m deploying this cluster, to the most recent stable version of CoreOS.
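Each instance picks up its configuration through cloud-config user data at boot. A minimal sketch of what the template’s UserData might carry (the discovery token is a placeholder you generate yourself at discovery.etcd.io/new; 4001/7001 are the classic etcd client/peer ports):

```yaml
#cloud-config

coreos:
  etcd:
    # placeholder: generate a fresh token at https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<your-token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```

In the CloudFormation template the token would typically be substituted in via a template parameter rather than hard-coded.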

Docker Websocket Chat Sample

We’ve decided to use docker containers to package and deploy application components as nano services. This post documents initial experimentation to build and run a sample application as a service.

The application is a Spring Integration STOMP-over-WebSocket chat sample app. It is a Java application running in an embedded Tomcat container, packaged as a standalone (uber) jar.

This type of application seems to be a great fit for a Docker container:

  • self-contained; includes dependencies
  • provides a service
  • consists of one process
  • can be used by other services (or people, but we’ll pretend it’s a messaging service for our applications)
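Since the jar is self-contained, the Dockerfile can stay tiny. A minimal sketch, assuming a hypothetical jar name, base image, and port (adjust all three to the real build):

```dockerfile
FROM java:8-jre

# copy the standalone (uber) jar into the image -- name is hypothetical
ADD target/chat-sample.jar /app/chat-sample.jar

# embedded Tomcat port; adjust to whatever the app is configured to use
EXPOSE 8080

CMD ["java", "-jar", "/app/chat-sample.jar"]
```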

Testing Apollo With WebSockets

We now have the Apollo broker running on our virtual host and a URI on our web server we can hit to generate messages. We want to display those messages in near real-time in a web browser on the host machine.

Our setup

  • web server and PHP backend app are running on our virtual machine (IP=10.0.0.10).
  • Apache Apollo broker is running on the virtual machine.
  • our webapp server will send a test message to tcp://localhost:61613/topic/okcra-api-ops when the URI /sendStompMessage is hit.
  • added a firewall rule on the virtual machine to allow access to port tcp/61623, where Apollo is listening for WebSocket connections.

web client

Apollo ships with example code, including a WebSocket example. The HTML page is located at /examples/stomp/websocket/index.html.

We will load that page into our browser, fill in our connection details, and connect to the Apollo broker on our virtual host. When the connection succeeds, the page waits and displays any new messages appearing in the topic or queue we have registered interest in.

Fun With puPHPet

puPHPet is a nifty tool to help configure a virtual PHP development environment. It uses Vagrant to manage virtual machines and Puppet to configure them. It’s a great start, but we need some additional configuration. This is the story of extending puPHPet to our needs.

puppet hiera

Recent puPHPet uses Puppet’s hiera facility to provide configuration information to Puppet at runtime. I would like to use hiera for our additional configuration, but puPHPet only seems to consult hiera for a small subset of parameters. The puPHPet help mentions a common.yaml hiera file, but the code uses config.yaml instead. That’s confusing, but ../puphpet/puppet/hiera.yaml spells it out for us.

---
:backends: yaml
:yaml:
    :datadir: '/vagrant/puphpet'
:hierarchy:
    - config
:logger: console
read hiera from yaml

Puppet provides a command-line tool to test the hiera lookup. Here is a simple example running on the virtual machine.
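A sketch of such a lookup, assuming the hiera.yaml shown above (the key name server is hypothetical; use any top-level key actually present in config.yaml):

```
$ hiera -c /vagrant/puphpet/puppet/hiera.yaml server
```

The -c flag points the hiera CLI at the same config file Puppet uses, so the lookup resolves against /vagrant/puphpet/config.yaml just as it would at provision time.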

Apache Apollo Start

Getting started with Apache Apollo, follow-up project to ActiveMQ

Apache Apollo is an all-dancing, all-singing message broker, queue manager, integration engine, etc., written in Scala and running on a JVM. It is the follow-on to ActiveMQ.

Web Site: http://activemq.apache.org/apollo/

My steps to install a new v1.7 Apollo broker for first-steps testing and proof of concept …

  • downloaded the gzip’d tar file
  • uncompressed and exploded it into /opt/apache-apollo-1.7/
  • created a broker named okc-broker

$ sudo /opt/apache-apollo-1.7/bin/apollo create okc-broker

  • created a link for the init service

$ sudo ln -s /opt/okc-broker/bin/apollo-broker-service /etc/init.d/apollo-broker-service

  • started the service

$ sudo service apollo-broker-service start

python test client CLI for stomp messaging protocol

$ sudo pip install stomp.py

$ python /usr/local/lib/python2.7/dist-packages/stomp -H localhost -P 61613 -V VERBOSE -U admin -W password
 -- or run the installed stomp command directly; see stomp --help for options --

> subscribe /topic/okcra-api-ops
Subscribing to "/topic/okcra-api-ops" with acknowledge set to "auto", id set to "1"

> send /topic/okcra-api-ops 'howdy'

'howdy'

> unsubscribe /topic/okcra-api-ops
Unsubscribing from "/topic/okcra-api-ops"
> send /topic/okcra-api-ops 'howdy'
> subscribe /topic/okcra-api-ops
Subscribing to "/topic/okcra-api-ops" with acknowledge set to "auto", id set to "2"
>
> send /topic/okcra-api-ops 'howdy again'

'howdy again'

> exit

The default config requires user authentication. There is one configured user with all rights - admin:password.

The broker configuration, logs, data, etc are under the broker directory; mine was created at /opt/okc-broker.
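The CLI above is speaking the STOMP wire protocol, which is simple enough to frame by hand: a command line, header lines, a blank line, the body, and a NUL terminator. A minimal illustrative sketch in Python (a real client like stomp.py also handles receipts, heartbeats, and header escaping):

```python
def stomp_frame(command, headers, body=""):
    # A STOMP frame is plain text: COMMAND, then header:value lines,
    # then a blank line, then the body, terminated by a NUL byte.
    lines = [command] + ["%s:%s" % (k, v) for k, v in headers] + ["", body]
    return "\n".join(lines) + "\x00"

# The SEND frame behind:  > send /topic/okcra-api-ops 'howdy'
frame = stomp_frame("SEND", [("destination", "/topic/okcra-api-ops")], "howdy")
print(frame)
```

Everything the broker does -- subscribe, send, ack -- travels as frames of this shape over one TCP (or WebSocket) connection.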

a PHP stomp client

I haven’t used PHP in a very long time, and back then I knew nothing about frameworks, autoloaders, and the like, so this is a bit of a struggle; but it’s a start.

install the client package

Our backend app uses Composer to load packages, so I’m going to integrate with that. I also found PECL packages, but picked Composer for this attempt.

I browsed packages at https://packagist.org/ and found fusesource/stomp-php.

I added this line to the app’s composer.json file: "fusesource/stomp-php": "2.1.1"
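In context, the require section of composer.json looks something like this (the rest of the file is whatever the app already has):

```json
{
    "require": {
        "fusesource/stomp-php": "2.1.1"
    }
}
```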

I then installed composer locally following instructions at composer and ran the command to update the app’s dependencies.

$ curl -sS https://getcomposer.org/installer | php
$ mv composer.phar /usr/local/bin/composer
$ composer update

php code

There are example programs in the stomp-php package, under /vendor/fusesource/stomp-php/examples/.

Some Elasticsearch Features

Some nifty features of elasticsearch

  • distributed, highly available, redundant, and horizontally scalable architecture
  • document store, used via API language clients or the HTTP REST interface
  • indexes are like databases in an RDBMS; types are like tables
  • every field in a document is indexed and can be queried
  • CRUD operations are easy
  • optimistic concurrency control on update/delete ops using the version parameter
  • automatic versioning
  • the update operation merges desired changes
  • groovy scripting by default, available in the request body
  • idempotent operations
  • retry on update with the retry_on_conflict parm
  • bulk operations
  • each document in an index has a type; every type has its own mapping or schema definition

  • simple value searches, ranges, etc
  • fields are indexed and analyzed (tokenization, normalization); there is an analyze API
  • _all is a system-generated full-text field, which can be disabled
  • full-text search
  • relevance scores in full-text search results
  • phrase searches
  • highlighting results
  • sorting results, by relevance by default or as specified with the sort parm
  • filter DSL (term, terms, range, exists, missing, bool, …)
  • query DSL (match_all, match, multi_match, bool, …)
  • analytics: aggregations, nested

Querying Elasticsearch

This post will begin our look at querying elasticsearch directly, via its search API. We’ve looked at reporting and graphing tools like Kibana, which provide a veneer over the actual queries. Now we’ll see what the queries and responses look like under the covers.

The first query we’ll make will search an entire index with no filter provided - we will just dump the data content.

The API is accessible via an HTTP or HTTPS URI using a POST request. There are many search flavors available, documented in detail in the elasticsearch search API; we’ll just touch the surface here. The search API accepts either a query parameter or a request body. The query-parameter form is limited but good for some testing, so we’ll use that first.

The simplest search query ever …

The URI structure to invoke the simplest elasticsearch query API looks like this:
http(s)://logsene-receiver.sematext.com/OUR-LOGSENE-APP-TOKEN/_search
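For comparison, the same kind of search expressed with a request body, which is where the full query DSL lives (match_all simply returns everything; size caps the number of hits returned):

```
$ curl -XPOST 'https://logsene-receiver.sematext.com/OUR-LOGSENE-APP-TOKEN/_search' -d '
{
    "query": { "match_all": {} },
    "size": 10
}'
```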

Kibana and Logsene

Kibana is the name of a visualization tool for elasticsearch; it runs in your web browser. Kibana enables you to query and view records from your elasticsearch repository. It’s easy to host Kibana yourself or, as we are doing here, use a hosted version.

The data and query interface are the same as we saw in the reporting entry; we’ll see more detail when we get to elasticsearch.

Query Our Log Records in Logsene

Another way to query our log data is via the exposed Elasticsearch REST API. For more info, see Search through Elasticsearch API and the Elasticsearch API reference.

From the referenced document:
When you use the API, here are the things you need to know:

  • host name: logsene-receiver.sematext.com
  • port: 80 (443 for HTTPS)
  • index name: your Logsene application token - note that this token should be kept secret

Searching

Let’s assume you want to search through your syslog events from the “user” facility. You could do something like this:

$ curl 'https://logsene-receiver.sematext.com/LOGSENE-APP-TOKEN/syslog/_search?q=facility:user'

Viewing Log Reports in Logsene

Now that our log records are stored and queryable by the logging service we’d like to make use of our data. We can generate reports about whatever interests us, be it usage patterns, errors, etc.

The log records can be parsed into discrete fields which can be indexed and subsequently searched and filtered using some exposed form of query. This is extremely valuable with distributed log sets.

For example I can ask to see all of yesterday’s apache access log entries for a particular application where the HTTP return code was >= 400, whether the application was running on one, a dozen, or a hundred servers.

This entry will explore Logsene’s reporting interface. We’ll look at Logsene’s more direct elasticsearch interfaces in later entries.