Fungible Clouds

Break the cloud services vendor lock-in

Using Chef to Deploy Cloud Applications


Chef is a popular, Apache-licensed, open source configuration management and automation tool for the cloud. It is built around three core ideas in the cloud computing industry.

Fungibility

Chef routinizes the repeatable steps in cloud operations management, and it does so in a way that is almost agnostic to the underlying cloud provider. Chef thus helps make applications almost agnostic to the underlying machines.

Using Chef to manage cloud applications makes a cloud computer from one provider, say AWS EC2, fairly easy to substitute with a cloud computer from another provider, say HP Cloud.

As my friend Kevin Jackson describes, cloud computing has several economic benefits. It would become even more economical, however, if cloud computers were fungible, meaning a cloud machine from provider X would be practically no different from a cloud machine from provider Y. Fungibility would make cloud computers easily substitutable, driving prices further down by increasing competition and reducing the differentiation between providers of cloud computers. Fungibility, by the way, has nothing to do with fungi.
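As a crude illustration of fungibility in code (a sketch with hypothetical class names I made up, not a real Chef or cloud API), the deployment logic depends only on a common interface, so one provider can be swapped for another:

```python
# Hypothetical sketch: deployment code talks to an interface,
# not to a specific cloud provider.

class Ec2Provider:
    name = "AWS EC2"

    def launch(self, size):
        # (real code would call the EC2 API here)
        return f"{self.name} instance ({size})"


class HpCloudProvider:
    name = "HP Cloud"

    def launch(self, size):
        # (real code would call the HP Cloud API here)
        return f"{self.name} instance ({size})"


def provision_web_server(provider):
    # Provider-agnostic: any object with a launch() method works.
    return provider.launch("small")


print(provision_web_server(Ec2Provider()))
print(provision_web_server(HpCloudProvider()))
```

The point is that `provision_web_server` never changes when the provider does, which is what Chef cookbooks aim for at the machine level.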

Routinizing the repeatable is key to successful operations management, and is described in detail in a paper on Integrated Operations by Prof. William Lovejoy of the University of Michigan Business School, Ann Arbor, which I quote:

If some task is to be repeated many times, it makes sense to find out the best way to perform the task and then require its execution according to that best practice. This means that in stable task environments, stable work routines and policies will be generated over time, and this is efficient. This derives from March and Simon’s (1958) model of organizational learning. The consequences for this are that one will want to consider the relationship between efficiency and discretion allowed workers in a stable environment.

The ability to routinize the repeatable and provide consistent environments from development, through testing and staging, to production is a key benefit to successful business operations in the cloud.

Idempotence

A Chef run (sudo chef-client) is idempotent: repeat runs produce exactly the same machine configuration as the initial run did. Idempotence is the property of certain operations in mathematics and computer science that they can be applied multiple times without changing the result beyond the initial application. The term was introduced by Benjamin Peirce in the context of elements of an algebra that remain invariant when raised to a positive integer power, and literally means the quality of having the same power, from idem + potence (same + power).

Idempotent operations enable consistently reproducible cloud environments for development and production use. They help bring order and reduce chaos in business operations.
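The idea can be sketched in a few lines of Python (a toy example of my own, not Chef code): an idempotent operation f satisfies f(f(x)) = f(x), so running it a second time changes nothing.

```python
def ensure_line(lines, wanted):
    """Idempotent: append `wanted` only if it is missing."""
    if wanted in lines:
        return list(lines)
    return lines + [wanted]

config = ["PermitRootLogin no"]
once = ensure_line(config, "PasswordAuthentication no")
twice = ensure_line(once, "PasswordAuthentication no")
assert once == twice  # the second application changed nothing
```

Chef resources behave the same way: they describe a desired state and converge to it, rather than blindly re-executing actions.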

Embryos and DNA Injections

OK, I admit this analogy is incorrect from a biological science perspective, but it does seem to work for some people as a crude way to explain the logic. You give life to, say, a giraffe (Giraffa camelopardalis), a cow (Bos primigenius), a leopard (Panthera pardus), or a person (Homo sapiens), based on the DNA you inject into an embryo. Along similar lines, you create a web server running nginx with a specific configuration, or a proxy running HAProxy, or a database master server running PostgreSQL, or whatever you need, by asking Chef to run an appropriate set of cookbooks on top of a cloud machine running just enough OS, or jeOS (pronounced as juice or jüs).

Chef helps spin up machines just the way you want, with a specific set of software and a specific configuration, building up from scratch on bare-metal machines loaded with just enough OS.

Putting these concepts to work

Let’s see in practice how these core concepts pan out in reality. This is best illustrated as a hands-on exercise: creating an infrastructure in the cloud where the production environment first runs on a single server instance, which is useful for rapid prototyping of apps while sharing one machine among multiple applications to minimize cost. Once you’re comfortable with this basic all-in-one configuration, it’s relatively simple to scale it out, separating the various roles onto multiple machine instances. I must caution you that this requires a fairly elaborate setup on your Linux/Unix workstation, but fortunately it’s all pretty straightforward, and there’s a lot of good documentation available on the internet.

Credentials as Env Variables


AWS credentials can either be passed inline (not ideal, as your code gets cluttered with secret info) or via environment variables (the preferred method). The AWS tools require you to save your AWS account’s main access key ID and secret access key in a specific way.

Create this credentials master file $HOME/.credentials-master.txt in the following format (replacing the values with your own credentials):

Credentials Master File (.credentials-master.txt)
AWSAccessKeyId=YOURACCESSKEYIDHERE
AWSSecretKey=YOURSECRETKEYHERE

Note: The above is the sample content of .credentials-master.txt file you are creating, and not shell commands to run.

Protect the above file and set an environment variable to tell AWS tools where to find it:

export AWS_CREDENTIAL_FILE=$HOME/.credentials-master.txt
chmod 600 $AWS_CREDENTIAL_FILE

We can now use the command line tools to create and manage the cloud.
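If you later want the same credentials in scripts of your own, the file’s key=value format is trivial to parse. Here is a minimal Python sketch (the function name is mine; only the two key names come from the AWS format above):

```python
import os

def load_aws_credentials(path=None):
    """Parse a key=value credentials file like the one above."""
    path = path or os.environ["AWS_CREDENTIAL_FILE"]
    creds = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks and comments; split on the first '=' only.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                creds[key.strip()] = value.strip()
    return creds
```

With AWS_CREDENTIAL_FILE exported as above, `load_aws_credentials()` returns a dict with AWSAccessKeyId and AWSSecretKey.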

Using IPython

IPython is a beautiful interactive shell for Python which you can easily install in a virtualenv. Just type

pip install tornado pyzmq ipython

and then run

ipython notebook --pylab inline

This opens http://127.0.0.1:8888/ in a browser window where you can run Python interactively. According to the IPython notebook installation notes, MathJax is not installed by default; it can be installed with these steps:

from IPython.external.mathjax import install_mathjax
install_mathjax()

Run Pandora via Terminal


If you like Pandora but would rather do without the Flash player and the visual ads that go along with it, all you need is pianobar, which you can install in one line. Just open Terminal and type:

brew install pianobar

Now you can run your flash-free Pandora player in your terminal.

➜  ~  pianobar
Welcome to pianobar (2012.09.07)! Press ? for a list of commands.
[?] Email: lvnilesh@yahoo.com
[?] Password: 
(i) Login... Ok.
(i) Get stations... Ok.
0)     Boston Radio
1)     Guns N' Roses Radio
2)     Kishore Kumar, Mohd. Rafi, Mukesh & Lata Mangeshkar Radio
3)     Lata Mangeshkar Radio
4)     Led Zeppelin Radio
5) q   Michael Jackson Radio
6)  Q  QuickMix
7)     Super Freak Radio
[?] Select station: 5
|>  Station "Michael Jackson Radio" (116177894800507788)
(i) Receiving new playlist... Ok.
|>  "Wanna Be Startin' Somethin'" by "Michael Jackson" on "Thriller"
|>  "Signed, Sealed, Delivered I'm Yours [Alternate Mix]" by "Stevie Wonder" on "The Complete Motown Singles: Volume 10: 1970"
|>  "Freak" by "Chic" on "The Definitive Groove Collection: Chic"
|>  "Brick House" by "The Commodores" on "Colour Collection"
|>  "Thriller" by "Michael Jackson" on "Thriller"
#   -05:34/05:59

Updated with bonus:

You can also run Last.fm via the terminal. Open Terminal and type:

brew install shell-fm

and create the file ~/.shell-fm/shell-fm.rc containing this.

username = your-username
password = your-password
default-radio = lastfm://user/your-username/your-station-name

#  for example: lastfm://user/lvnilesh/personal

and run

➜  ~  shell-fm                
Shell.FM v0.8, (C) 2006-2010 by Jonas Kramer
Published under the terms of the GNU General Public License (GPL).

Press ? for help.

Receiving lvnilesh’s Library Radio.
Now playing "Call Me Maybe" by Carly Rae Jepsen.
-00:01

Enjoy!

Mathjax Integration


My blog is now MathJax enabled, which means I can now write math expressions in plain text Markdown. Here is the wave equation by Erwin Schrödinger.

$$ i\hbar\frac{\partial \psi}{\partial t} = \frac{-\hbar^2}{2m} \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) \psi + V \psi $$

Here is $\rm \LaTeX$ inline, a math representation of a circle ($x^2 + y^2 = 1$), and here is Euler’s number. $$ e = \lim_{n \to \infty } \left( 1 + \frac{1}{n} \right)^n $$

Further reading

Computational Investing


I just discovered an online course on Computational Investing that Prof. Tucker Balch from the College of Computing at Georgia Tech is offering on Coursera. It nicely blends my interests in the financial markets and computers, so I immediately registered for it. The course has not started yet, but for those interested in getting a head start, here is a quick step-by-step on how I set my computer up with the QuantSoftware ToolKit (QSTK).

Getting the basics down

ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go)
brew install wget
brew install pyqt # brew installed sip as sip is a dependency
brew install gfortran
brew install gtk
brew install ghostscript
brew install swig

Use a virtual environment for QSTK (so it won’t mess up your existing setup). See my other post on setting up a virtualenv, and create a quant virtualenv:

mkvirtualenv quant
cd ~/domains/quant

The rest of the steps take place inside the newly created quant virtualenv.

Install numpy from source

pip install -e git+https://github.com/numpy/numpy.git#egg=numpy-dev

Install the other dependencies via a requirements.txt file, created with pip freeze > requirements.txt from a working installation.

PIP Requirements File (requirements.txt)
Cython==0.16
distribute==0.6.28
epydoc==3.0.1
ipython==0.13
lxml==2.3.5
patsy==0.1.0
python-dateutil==1.5
pytz==2012d
pyzmq==2.2.0.1
tornado==2.3
wsgiref==0.1.2
Jinja2==2.6
Pygments==1.5
Sphinx==1.1.3
docutils==0.9.1
readline==6.2.2
six==1.1.0
xlrd==0.8.0
-e git+https://github.com/pydata/pandas.git#egg=pandas-dev
-e git+https://github.com/sympy/sympy.git#egg=sympy-dev
-e git+https://github.com/matplotlib/matplotlib.git#egg=matplotlib-dev
-e git+https://github.com/scipy/scipy.git#egg=scipy-dev

wget http://blog.fungibleclouds.com/downloads/code/requirements.txt
pip install -r requirements.txt

Install statsmodels from source

pip install -e git+https://github.com/statsmodels/statsmodels.git#egg=statsmodels-dev

Install CVXopt from source

pip install cvxopt should work, but there seems to be a bug with cvxopt, so install it from source:

cd ~/domains/quant/src
wget http://abel.ee.ucla.edu/src/cvxopt-1.1.5.tar.gz
tar zxvf cvxopt-1.1.5.tar.gz
cd cvxopt-1.1.5/src
python setup.py install

Install QSTK

cd ~/domains/quant/
mkdir QSTK
cd QSTK
svn checkout http://svn.quantsoftware.org/openquantsoftware/trunk .

Install QSDATA - sample data from the stock market

wget http://www.quantsoftware.org/QSData.zip
unzip QSData.zip

Configure the qstk specific env variables

cp config.sh local.sh
vi local.sh # edit the env vars below to match your paths:
	$QS : This is the path to your installation (the location of the Bin, Example, Docs folders).
	$QSDATA : This is where all the stock data will be, i.e. $QS/QSData/
source local.sh

Test the env variables

echo $QS # would show ~/domains/quant/QSTK
echo $QSDATA # would show ~/domains/quant/QSTK/QSData

Now you are ready to run the QSTK examples

ipython notebook --pylab inline # This will open your default browser http://localhost:8888

Click on New Notebook to create a new tab with a new empty notebook. In that notebook, type this code segment to test your setup:

import numpy as np
import pandas as pand
import matplotlib.pyplot as plt
from pylab import *
x = np.random.randn(1000)
plt.hist(x,100)
plt.savefig('test.png',format='png')

Press SHIFT-ENTER to run it; you should see the histogram rendered inline.

The class has not started yet, but here are the two recommended readings, which I have already ordered.

I am looking forward to applying the learnings from this class to my personal portfolio.

Python Virtualenv


Prepare a virtual environment (so it won’t mess up your existing setup):

sudo easy_install pip
sudo pip install virtualenv virtualenvwrapper
mkdir domains # create a directory to store different virtual environments

Create a temporary text file (say ~/appendthis) with the text below:

export WORKON_HOME=$HOME/domains
source /usr/local/bin/virtualenvwrapper.sh
export PIP_VIRTUALENV_BASE=$

Append that temp file to ~/.zshenv (or .profile or .bashrc depending on your shell)

cat ~/appendthis >> ~/.zshenv

Exit the current shell and start the terminal again; you should see something like this show up:

Linux quant 2.6.32-27-generic #49-Ubuntu SMP Thu Dec 2 00:51:09 UTC 2010 x86_64 GNU/Linux Ubuntu 10.04.1 LTS

Welcome to Ubuntu!
* Documentation:  https://help.ubuntu.com/
Last login: Thu Dec 23 14:35:06 2010 from imac.workgroup
virtualenvwrapper.user_scripts Creating /home/nilesh/domains/initialize
virtualenvwrapper.user_scripts Creating /home/nilesh/domains/premkvirtualenv
virtualenvwrapper.user_scripts Creating /home/nilesh/domains/postmkvirtualenv
virtualenvwrapper.user_scripts Creating /home/nilesh/domains/prermvirtualenv
virtualenvwrapper.user_scripts Creating /home/nilesh/domains/postrmvirtualenv
virtualenvwrapper.user_scripts Creating /home/nilesh/domains/predeactivate
virtualenvwrapper.user_scripts Creating /home/nilesh/domains/postdeactivate
virtualenvwrapper.user_scripts Creating /home/nilesh/domains/preactivate
virtualenvwrapper.user_scripts Creating /home/nilesh/domains/postactivate
virtualenvwrapper.user_scripts Creating /home/nilesh/domains/get_env_details

Now you can create any number of python virtual environments. For example, I create myfirstenv

mkvirtualenv myfirstenv # create my first virtual environment named myfirstenv
pip install BLAH # install BLAH
deactivate # deactivate that virtualenv
rmvirtualenv myfirstenv # remove myfirstenv

To work with virtualenv again, simply type:

workon myfirstenv
cd ~/domains/myfirstenv

Wrappers: virtualenvwrapper provides several useful wrappers that can be used as shortcuts:

mkvirtualenv (create a new virtualenv)
rmvirtualenv (remove an existing virtualenv)
workon (change the current virtualenv)
add2virtualenv (add external packages in a .pth file to current virtualenv)
cdsitepackages (cd into the site-packages directory of current virtualenv)
cdvirtualenv (cd into the root of the current virtualenv)
deactivate (deactivate virtualenv, which calls several hooks)

Hooks: One of the coolest things about virtualenvwrapper is the ability to provide hooks when an event occurs. Hook files can be placed in ENV/bin/ and are simply plain-text files with shell commands. virtualenvwrapper provides the following hooks:

postmkvirtualenv
prermvirtualenv
postrmvirtualenv
postactivate
predeactivate
postdeactivate

When you are done with that virtualenv, you can just type

rmvirtualenv myfirstenv # this will destroy that virtualenv named `myfirstenv` under ~/domains

Installing Multiple Versions of Ruby


Install the Xcode command-line tools. They are available from the Preferences > Downloads panel in Xcode, or as a separate download from the Apple Developer site.

Install gcc-4.2. Ruby versions before 1.9 (such as 1.8.7 or REE) do not play well with Apple’s LLVM compiler, so you’ll need to install the old gcc-4.2 compiler. It’s available in the Homebrew homebrew/dupes tap.

brew tap homebrew/dupes
brew install apple-gcc42

Install XQuartz. The OS X upgrade will also remove your old X11.app installation, so go grab XQuartz from http://xquartz.macosforge.org/landing/ and install it (you’ll need v2.7.2 or later for Mountain Lion).

Install Ruby 1.9. This one is simple.

rbenv install 1.9.3-p194

Install Ruby 1.8.7. Remember to add the path to the xquartz X11 includes in CPPFLAGS. Here I’m using rbenv, but the same environment variables should work for rvm.

CPPFLAGS=-I/opt/X11/include rbenv install 1.8.7-p370

Install ree. Remember to add the path to the xquartz X11 includes in CPPFLAGS and the path to gcc-42 in CC. Here I’m using rbenv, but the same environment variables should work for rvm.

CPPFLAGS=-I/opt/X11/include CC=/usr/local/bin/gcc-4.2 rbenv install ree-1.8.7-2012.02

Enjoy your new Ruby versions

rbenv versions

ZFS to the Rescue


This morning one of the hard disks, ada5p2, in my tank pool decided to become unavailable. Even though I store critical data on this pool, I have nothing to worry about, because this ZFS pool is configured as raidz2: a disk pool that can tolerate two simultaneous disk failures.

# zpool status -v tank
	pool: tank
	state: DEGRADED
	status: One or more devices could not be opened.  Sufficient replicas exist for the pool to continue functioning in a degraded state.
	action: Attach the missing device and online it using 'zpool online'. 
	see: http://www.sun.com/msg/ZFS-8000-2Q
	scrub: scrub in progress for 0h0m, 0.00% done, 73h11m to go
	config:
 
		NAME        STATE     READ WRITE CKSUM
		tank        DEGRADED     0     0     0
		  raidz2    DEGRADED     0     0     0
			ada1p2  ONLINE       0     0     0
			ada2p2  ONLINE       0     0     0
			ada3p2  ONLINE       0     0     0
			ada4p2  ONLINE       0     0     0
			ada5p2  UNAVAIL      3 3.69K     0  cannot open

	errors: No known data errors

Without shutting down my storage system, I just yanked the SATA cable from the broken hard disk and hot-replaced it with another of similar size. ZFS will resilver the replacement drive on its own over the next couple of hours, but I was essentially done, with no downtime and no data errors. ZFS is nice indeed.

A cron job that periodically scrubs the zpools helps. ZFS has a built-in scrub function that checks for errors and corrects them when possible. Running this task regularly is pretty essential to prevent further errors that aren’t correctable. By default, ZFS doesn’t scrub periodically; you have to tell it when to. The easiest way to set up periodic scrubbing is crontab, a feature present in all UNIX systems for scheduling background tasks. Start editing the root user’s crontab by issuing the command crontab -e as root. A crontab entry follows a simple format:

* * * * * command to run
- - - - -
| | | | |
| | | | +----- day of week (0-6) (Sunday is 0)
| | | +------- month (1-12)
| | +--------- day of month (1-31)
| +----------- hour (0-23)
+------------- min (0-59)

For example, I want my system to scrub my tank zpool on Sundays at 04:00 and my twoteebee zpool on Thursdays at 04:00. The specific commands that I put in my crontab are:

0 4 * * 0 /sbin/zpool scrub tank
0 4 * * 4 /sbin/zpool scrub twoteebee
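To make the field order concrete, here is a tiny Python helper (hypothetical, just spelling out the format; cron itself involves no Python):

```python
def cron_entry(minute, hour, day_of_week, command):
    """Build a weekly crontab line: minute, hour, day-of-month,
    month, day-of-week, then the command. The two middle fields
    are '*' so the job runs on any day of month, any month."""
    return f"{minute} {hour} * * {day_of_week} {command}"

# Sundays (0) at 04:00 and Thursdays (4) at 04:00, matching the entries above:
print(cron_entry(0, 4, 0, "/sbin/zpool scrub tank"))
print(cron_entry(0, 4, 4, "/sbin/zpool scrub twoteebee"))
```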

Improving Blog Performance Using AWS S3 + CloudFront


When I first switched over to blogging with Octopress, I loaded it up on Heroku via git, but I was not super satisfied with the site’s performance for a worldwide audience. It took me a bit of exploring to find a good but cost-effective way to improve performance using a CDN, so here is a writeup of my setup that might help others.

If you have a blog but haven’t heard of Octopress, you should check it out. It’s great for anyone who likes writing in the text editor of their choice (I currently like iA Writer and Writing Kit) instead of some web interface, wants to store their work in git, and is comfortable running a few Terminal commands.

Why AWS S3 and CloudFront?

I initially hosted my blog on a single web dyno, a free service offered by Heroku, with my Octopress blog stored in git. The price was certainly right, but Heroku experienced a bit of downtime over the life of my blog there, and I feel strongly about uptime.

An alternative is using Amazon S3, Amazon’s cloud file storage service. Amazon lets you host a static website on S3 with your own domain name. You can also easily use Amazon CloudFront with S3. CloudFront is a CDN (content distribution network) that serves your content from a worldwide server network and helps to make your website faster.

Setting up S3

If you’ve never used Amazon Web Services before, it can be a little confusing to get started. First, you need to sign up for an AWS account. When you have your account, log into the AWS Management Console and head to the S3 tab. Then:

  • Create a bucket called blog.myowndomain.com. You cannot use the bare myowndomain.com, so use a subdomain like www or blog.

  • Under the properties for this bucket, go to the Website tab, check the box to enable static website hosting, and set your index and error documents. Your index document should probably be index.html. Your error document could be 404.html (an HTML page for file-not-found (404) errors). Make a note of your endpoint (http://blog.fungibleclouds.com.s3-website-us-east-1.amazonaws.com/); you’ll need it to create a custom-origin CloudFront distribution.

  • Create a bucket policy under permissions. Here is my bucket policy.

Setting up CloudFront

In the AWS Console, go to the CloudFront tab and create a new distribution with the S3 website endpoint as a custom origin. This link on custom origins helps. The distribution will mirror your S3 bucket on CloudFront; for example, (http://d2h7g34rdqpc09.cloudfront.net/index.html) shows the home page of my website exactly as it appears on S3.

CloudFront will cache the contents of your S3 bucket for up to 24 hours. This cache is created from S3 the first time someone hits an asset under your CloudFront URL. This means that CloudFront won’t necessarily reflect changes on S3 immediately. You can manually invalidate/expire objects in CloudFront, but it’s easier to just not use it for anything that will change frequently.

Setting up your DNS

You’ll need to create a DNS CNAME alias record to use your own domain with CloudFront that mirrors your S3 bucket. The way you do this depends on your DNS provider (I use Zerigo, which is cheap, reliable, and easy to use). You need to create a CNAME pointing blog.myowndomain.com to your CloudFront endpoint.

After propagation, your DNS results should look something like this.

nilesh$ dig blog.fungibleclouds.com

; <<>> DiG 9.8.1-P1 <<>> blog.fungibleclouds.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13827
;; flags: qr rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 13, ADDITIONAL: 0

;; QUESTION SECTION:
;blog.fungibleclouds.com.	IN	A

;; ANSWER SECTION:
blog.fungibleclouds.com. 900	IN	CNAME	d2h7g34rdqpc09.cloudfront.net.
d2h7g34rdqpc09.cloudfront.net. 59 IN	A	205.251.215.16
d2h7g34rdqpc09.cloudfront.net. 59 IN	A	205.251.215.67
d2h7g34rdqpc09.cloudfront.net. 59 IN	A	205.251.215.91
d2h7g34rdqpc09.cloudfront.net. 59 IN	A	205.251.215.140
d2h7g34rdqpc09.cloudfront.net. 59 IN	A	205.251.215.174
d2h7g34rdqpc09.cloudfront.net. 59 IN	A	205.251.215.176
d2h7g34rdqpc09.cloudfront.net. 59 IN	A	205.251.215.226
d2h7g34rdqpc09.cloudfront.net. 59 IN	A	205.251.215.2

;; AUTHORITY SECTION:
.			491338	IN	NS	j.root-servers.net.
.			491338	IN	NS	a.root-servers.net.
.			491338	IN	NS	g.root-servers.net.
.			491338	IN	NS	k.root-servers.net.
.			491338	IN	NS	c.root-servers.net.
.			491338	IN	NS	f.root-servers.net.
.			491338	IN	NS	e.root-servers.net.
.			491338	IN	NS	h.root-servers.net.
.			491338	IN	NS	d.root-servers.net.
.			491338	IN	NS	l.root-servers.net.
.			491338	IN	NS	i.root-servers.net.
.			491338	IN	NS	b.root-servers.net.
.			491338	IN	NS	m.root-servers.net.

;; Query time: 155 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri Aug  3 21:15:39 2012
;; MSG SIZE  rcvd: 420

nilesh$

Pushing your Octopress changes over to S3

This step is fairly simple. First you edit the posts you store in source/_posts. I currently prefer iA Writer, so I keep a little executable script labeled ia to invoke it from the terminal.

#!/bin/bash
for FILE in "$@"
do
	open -a "iA Writer" "$FILE"
done;

Then you generate static HTML for your site.

$ rake generate

and finally you push your incremental updates over to S3 using s3cmd in rsync-like fashion:

$ s3cmd sync --reduced-redundancy --recursive --exclude "*.tiff"  --exclude "*.plist" --delete-removed ~/blog.fungibleclouds.com/public/* s3://blog.fungibleclouds.com/ --verbose
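The reason this feels rsync-like is that s3cmd compares checksums and only transfers what changed. Here is a rough local sketch of that decision in Python (a helper of my own, not s3cmd’s actual code):

```python
import hashlib

def files_to_upload(local, remote):
    """Return names whose MD5 differs from, or is absent in, the remote index.

    `local` maps filename -> file bytes; `remote` maps filename -> MD5 hexdigest.
    """
    changed = []
    for name, data in local.items():
        if remote.get(name) != hashlib.md5(data).hexdigest():
            changed.append(name)
    return sorted(changed)

local = {"index.html": b"<html>new</html>", "404.html": b"<html>404</html>"}
remote = {"404.html": hashlib.md5(b"<html>404</html>").hexdigest()}
print(files_to_upload(local, remote))  # only index.html needs uploading
```

Unchanged files are skipped entirely, which is why incremental pushes of a mostly static blog are fast.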

Homebrew MySQL


Homebrew certainly makes my life simple. This morning it took just 84 seconds to install MySQL from source.

brew install mysql
Set up databases to run AS YOUR USER ACCOUNT with:
    unset TMPDIR
    mysql_install_db --verbose --user=`whoami` --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp

To set up base tables in another folder, or use a different user to run
mysqld, view the help for mysqld_install_db:
    mysql_install_db --help

and view the MySQL documentation:
  * http://dev.mysql.com/doc/refman/5.5/en/mysql-install-db.html
  * http://dev.mysql.com/doc/refman/5.5/en/default-privileges.html

To run as, for instance, user "mysql", you may need to `sudo`:
    sudo mysql_install_db ...options...

Start mysqld manually with:
    mysql.server start

    Note: if this fails, you probably forgot to run the first two steps up above

A "/etc/my.cnf" from another install may interfere with a Homebrew-built
server starting up correctly.

To connect:
    mysql -uroot

To launch on startup:
* if this is your first install:
    mkdir -p ~/Library/LaunchAgents
    cp /usr/local/Cellar/mysql/5.5.25a/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/
    launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist

* if this is an upgrade and you already have the homebrew.mxcl.mysql.plist loaded:
    launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
    cp /usr/local/Cellar/mysql/5.5.25a/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/
    launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist

You may also need to edit the plist to use the correct "UserName".

==> Summary
/usr/local/Cellar/mysql/5.5.25a: 6382 files, 222M, built in 84 seconds

Set up db to run as your user account

unset TMPDIR
mysql_install_db --verbose --user=`whoami` --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp

Start the server

mysql.server start

Secure the installation

mysql_secure_installation

Make sure to let mysql launch on startup

mkdir -p ~/Library/LaunchAgents
cp /usr/local/Cellar/mysql/5.5.25a/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/
launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist

Make sure to check the plist to use the correct user

vi ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist