Wednesday, December 21, 2011

Check out a specific revision from Subversion from the command line


svn checkout svn://somepath@1234 working-directory
or
svn checkout url://repository/path@1234
or
svn checkout -r 1234 url://repository/path
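For reference, the two syntaxes are not interchangeable: `@1234` is a peg revision (it tells Subversion which object the URL names), while `-r 1234` is an operative revision (which revision of that object to fetch). They give different results once a path has been moved or renamed. A sketch with placeholder URLs:

```shell
# Peg revision: the thing that lived at this URL as of r1234
svn checkout "url://repository/path@1234" working-directory

# Operative revision: resolve today's path, then fetch its contents at r1234
svn checkout -r 1234 "url://repository/path" working-directory

# Combined: resolve the path as it was named in r1234, fetch its state at r1200
svn checkout -r 1200 "url://repository/path@1234" working-directory
```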

Friday, December 9, 2011

UTF8 encode decode


/* Licensed to the Apache Software Foundation (ASF) under one or more
         * contributor license agreements.  See the NOTICE file distributed with
         * this work for additional information regarding copyright ownership.
         * The ASF licenses this file to You under the Apache License, Version 2.0
         * (the "License"); you may not use this file except in compliance with
         * the License.  You may obtain a copy of the License at
         *
         *     http://www.apache.org/licenses/LICENSE-2.0
         *
         * Unless required by applicable law or agreed to in writing, software
         * distributed under the License is distributed on an "AS IS" BASIS,
         * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
         * See the License for the specific language governing permissions and
         * limitations under the License.
         */

        package org.apache.harmony.nio_char.tests.java.nio.charset;

        import dalvik.annotation.TestTargetClass;
        import dalvik.annotation.TestTargets;
        import dalvik.annotation.TestTargetNew;
        import dalvik.annotation.TestLevel;

        import java.io.IOException;
        import java.nio.BufferOverflowException;
        import java.nio.ByteBuffer;
        import java.nio.CharBuffer;
        import java.nio.charset.Charset;
        import java.nio.charset.CharsetDecoder;
        import java.nio.charset.CharsetEncoder;
        import java.nio.charset.CoderMalfunctionError;
        import java.nio.charset.CoderResult;

        import junit.framework.TestCase;

        @TestTargetClass(CharsetEncoder.class)
        public class CharsetEncoderTest extends TestCase {

            /**
             * @tests java.nio.charset.CharsetEncoder.CharsetEncoder(
             *        java.nio.charset.Charset, float, float)
             */
            @TestTargets({
                @TestTargetNew(level = TestLevel.PARTIAL_COMPLETE,
                        notes = "Checks IllegalArgumentException",
                        method = "CharsetEncoder",
                        args = {java.nio.charset.Charset.class, float.class, float.class}),
                @TestTargetNew(level = TestLevel.PARTIAL_COMPLETE,
                        notes = "Checks IllegalArgumentException",
                        method = "CharsetEncoder",
                        args = {java.nio.charset.Charset.class, float.class, float.class, byte[].class})
            })
            public void test_ConstructorLjava_nio_charset_CharsetFF() {
                // Regression for HARMONY-141
                try {
                    Charset cs = Charset.forName("UTF-8");
                    new MockCharsetEncoderForHarmony141(cs, 1.1f, 1);
                    fail("Assert 0: Should throw IllegalArgumentException.");
                } catch (IllegalArgumentException e) {
                    // expected
                }

                try {
                    Charset cs = Charset.forName("ISO8859-1");
                    new MockCharsetEncoderForHarmony141(cs, 1.1f, 1,
                            new byte[] { 0x1a });
                    fail("Assert 1: Should throw IllegalArgumentException.");
                } catch (IllegalArgumentException e) {
                    // expected
                }
            }

            /**
             * @tests java.nio.charset.CharsetEncoder.CharsetEncoder(
             *        java.nio.charset.Charset, float, float)
             */
            @TestTargetNew(level=TestLevel.PARTIAL_COMPLETE,notes="",method="CharsetEncoder",args={java.nio.charset.Charset.class,float.class,float.class})
            public void test_ConstructorLjava_nio_charset_CharsetNull() {
                // Regression for HARMONY-491
                CharsetEncoder ech = new MockCharsetEncoderForHarmony491(null,
                        1, 1);
                assertNull(ech.charset());
            }

            /**
             * Helper for constructor tests
             */

            public static class MockCharsetEncoderForHarmony141 extends
                    CharsetEncoder {

                protected MockCharsetEncoderForHarmony141(Charset cs,
                        float averageBytesPerChar, float maxBytesPerChar) {
                    super(cs, averageBytesPerChar, maxBytesPerChar);
                }

                public MockCharsetEncoderForHarmony141(Charset cs,
                        float averageBytesPerChar, float maxBytesPerChar,
                        byte[] replacement) {
                    super(cs, averageBytesPerChar, maxBytesPerChar, replacement);
                }

                protected CoderResult encodeLoop(CharBuffer in, ByteBuffer out) {
                    return null;
                }
            }

            public static class MockCharsetEncoderForHarmony491 extends
                    CharsetEncoder {

                public MockCharsetEncoderForHarmony491(Charset arg0,
                        float arg1, float arg2) {
                    super(arg0, arg1, arg2);
                }

                protected CoderResult encodeLoop(CharBuffer arg0,
                        ByteBuffer arg1) {
                    return null;
                }

                public boolean isLegalReplacement(byte[] arg0) {
                    return true;
                }
            }

            /*
             * Test malfunction encode(CharBuffer)
             */
            @TestTargetNew(level=TestLevel.PARTIAL,notes="Regression test checks CoderMalfunctionError",method="encode",args={java.nio.CharBuffer.class})
            public void test_EncodeLjava_nio_CharBuffer() throws Exception {
                MockMalfunctionCharset cs = new MockMalfunctionCharset("mock",
                        null);
                try {
                    cs.encode(CharBuffer.wrap("AB"));
                    fail("should throw CoderMalfunctionError");
                } catch (CoderMalfunctionError e) {
                    // expected
                }
            }

            /*
             * Mock charset class with malfunction decode & encode.
             */
            static final class MockMalfunctionCharset extends Charset {

                public MockMalfunctionCharset(String canonicalName,
                        String[] aliases) {
                    super(canonicalName, aliases);
                }

                public boolean contains(Charset cs) {
                    return false;
                }

                public CharsetDecoder newDecoder() {
                    return Charset.forName("UTF-8").newDecoder();
                }

                public CharsetEncoder newEncoder() {
                    return new MockMalfunctionEncoder(this);
                }
            }

            /*
             * Mock encoder. encodeLoop always throws unexpected exception.
             */
            static class MockMalfunctionEncoder extends
                    java.nio.charset.CharsetEncoder {

                public MockMalfunctionEncoder(Charset cs) {
                    super(cs, 1, 3, new byte[] { (byte) '?' });
                }

                protected CoderResult encodeLoop(CharBuffer in, ByteBuffer out) {
                    throw new BufferOverflowException();
                }
            }

            /*
             * Test reserve bytes encode(CharBuffer,ByteBuffer,boolean)
             */
            @TestTargetNew(level=TestLevel.PARTIAL,notes="Functional test.",method="encode",args={java.nio.CharBuffer.class,java.nio.ByteBuffer.class,boolean.class})
            public void test_EncodeLjava_nio_CharBufferLjava_nio_ByteBufferB() {
                CharsetEncoder encoder = Charset.forName("utf-8").newEncoder();
                CharBuffer in1 = CharBuffer.wrap("\ud800");
                CharBuffer in2 = CharBuffer.wrap("\udc00");
                ByteBuffer out = ByteBuffer.allocate(4);
                encoder.reset();
                CoderResult result = encoder.encode(in1, out, false);
                assertEquals(4, out.remaining());
                assertTrue(result.isUnderflow());
                result = encoder.encode(in2, out, true);
                assertEquals(4, out.remaining());
                assertTrue(result.isMalformed());
            }

            /**
             * @tests {@link java.nio.charset.Charset#encode(java.nio.CharBuffer)}
             */
            public void testUtf8Encoding() throws IOException {
                byte[] orig = new byte[] { (byte) 0xed, (byte) 0xa0,
                        (byte) 0x80 };
                String s = new String(orig, "UTF-8");
                assertEquals(1, s.length());
                assertEquals(55296, s.charAt(0));
                Charset.forName("UTF-8").encode(CharBuffer.wrap(s));
                // The round-trip check below remains disabled: the lone high
                // surrogate is unmappable, so the encoder emits replacement
                // bytes instead of reproducing the original three bytes.
                //        ByteBuffer buf = Charset.forName("UTF-8").encode(CharBuffer.wrap(s));
                //        for (byte o : orig) {
                //            assertEquals(o, buf.get());
                //        }
            }
        }
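The split-surrogate behaviour exercised above can be reproduced with just the JDK. A minimal sketch (the class and method names below are mine, not part of the Harmony suite):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.StandardCharsets;

public class SplitSurrogateDemo {

    // Feed the two halves of a UTF-16 surrogate pair to the encoder
    // in separate buffers, mirroring the test above.
    static CoderResult[] encodeSplitPair() {
        CharsetEncoder enc = StandardCharsets.UTF_8.newEncoder();
        ByteBuffer out = ByteBuffer.allocate(8);
        // endOfInput = false: the trailing high surrogate stays pending,
        // so the encoder reports underflow and writes nothing
        CoderResult first = enc.encode(CharBuffer.wrap("\ud800"), out, false);
        // A fresh buffer starting with a lone low surrogate is malformed
        CoderResult second = enc.encode(CharBuffer.wrap("\udc00"), out, true);
        return new CoderResult[] { first, second };
    }

    public static void main(String[] args) {
        CoderResult[] r = encodeSplitPair();
        System.out.println(r[0].isUnderflow());
        System.out.println(r[1].isMalformed());
    }
}
```

This mirrors the assertions in test_EncodeLjava_nio_CharBufferLjava_nio_ByteBufferB: the first call underflows while the encoder waits for the low surrogate, and the second is rejected because the pair was split across buffers.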


source : http://www.java2s.com/Open-Source/Android/android-core/platform-libcore/org/apache/harmony/nio_char/tests/java/nio/charset/CharsetEncoderTest.java.htm

Sunday, November 20, 2011

Creating a header and footer for an existing PDF with iText


import java.io.FileOutputStream;

import com.itextpdf.text.BaseColor;
import com.itextpdf.text.Chunk;
import com.itextpdf.text.Document;
import com.itextpdf.text.Font;
import com.itextpdf.text.Paragraph;
import com.itextpdf.text.pdf.PdfContentByte;
import com.itextpdf.text.pdf.PdfImportedPage;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfWriter;

public class PDF {

    public static void main(String[] args) {
        Document document = new Document();
        try {
            PdfWriter writer = PdfWriter.getInstance(document,
                    new FileOutputStream("/home/zana/Desktop/v59-10.pdf"));

            document.open();

            PdfContentByte cb = writer.getDirectContent();

            // Load the existing PDF
            PdfReader reader = new PdfReader("/home/zana/Desktop/v59-9.pdf");
            PdfImportedPage page = writer.getImportedPage(reader, 1);

            // Copy the first page of the existing PDF into the output PDF
            document.newPage();
            cb.addTemplate(page, 0, 0);

            // Add your new data / text here, for example a linked chunk
            Font font = new Font();
            font.setColor(BaseColor.BLUE);
            font.setStyle(Font.UNDERLINE);
            Paragraph paragraph = new Paragraph();
            paragraph.setLeading(0, 25);
            paragraph.setAlignment(Paragraph.ALIGN_LEFT);
            Chunk chunk = new Chunk("http://www.geek-tutorials.com", font)
                    .setAnchor("http://www.geek-tutorials.com");
            paragraph.add(chunk);
            document.add(paragraph);

            document.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

}
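The code above overlays new content on one imported page; a repeating header and footer, as in the post title, is usually done with iText 5's page-event callbacks, which fire once per page. A sketch under that assumption (the HeaderFooter class name, texts, and coordinates are illustrative, and it requires the iText 5 jar):

```java
import com.itextpdf.text.Document;
import com.itextpdf.text.Element;
import com.itextpdf.text.Phrase;
import com.itextpdf.text.pdf.ColumnText;
import com.itextpdf.text.pdf.PdfPageEventHelper;
import com.itextpdf.text.pdf.PdfWriter;

// Register with writer.setPageEvent(new HeaderFooter()) before document.open()
public class HeaderFooter extends PdfPageEventHelper {
    @Override
    public void onEndPage(PdfWriter writer, Document document) {
        float center = (document.left() + document.right()) / 2;
        // Header just above the top margin
        ColumnText.showTextAligned(writer.getDirectContent(),
                Element.ALIGN_CENTER, new Phrase("My Header"),
                center, document.top() + 10, 0);
        // Footer with the page number just below the bottom margin
        ColumnText.showTextAligned(writer.getDirectContent(),
                Element.ALIGN_CENTER,
                new Phrase("Page " + writer.getPageNumber()),
                center, document.bottom() - 10, 0);
    }
}
```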

Tuesday, November 15, 2011

Turkish company builds 65-inch Android 'tablet' with Honeycomb, 1080p support (video)



Want Honeycomb on your TV? You can take your chances with a Google TV-enabled set from Sony, or you can get the full Android experience by adding a connected tablet to your HD mix -- if Istanbul-based Ardic gets its solution out the door, at least. The Turkish company's prototype uses a 10-inch Android Honeycomb-based tablet to power a 65-inch LCD with 1080p support for basic gestures, like pinch and zoom. The display currently has two touch sensors, but a version with four sensors is on the way, which will bring multi-touch support. The tablet is powered by an NVIDIA Tegra 2 SoC, and includes 1GB of RAM, 16GB of flash memory, dual cameras, HDMI, USB, microSD, and 3G and WiFi connectivity. A dock enables instant connectivity with the OEM TV, including HDMI for video and audio, and USB for touch input (a wireless version is in the works as well).

The devs customized Android to support 1080p output, and it appears to work quite seamlessly, as you'll see in the embedded video. And this isn't simply another goofy demo or proof of concept -- the Turkish company is in talks with education and enterprise customers and hopes to bring this setup to production as a more power- and cost-efficient smart board alternative. The company eventually hopes to offer displays in a variety of sizes, all powered by a pocketable device such as a smartphone -- but for now, watch in wonder as the 65-inch proto we have today struts its stuff in the video after the break.

source : http://www.engadget.com/2011/11/14/turkish-company-builds-65-inch-android-tablet-with-honeycomb/

Friday, November 11, 2011

Default MYSQL Engine for MYSQL Cluster


I am using mac and I installed mysql using homebrew.
brew install mysql
pretty standard installation.
mysql> show engines;
+------------+---------+------------------------------------------------------------+--------------+------+------------+
| Engine     | Support | Comment                                                    | Transactions | XA   | Savepoints |
+------------+---------+------------------------------------------------------------+--------------+------+------------+
| MRG_MYISAM | YES     | Collection of identical MyISAM tables                      | NO           | NO   | NO         |
| CSV        | YES     | CSV storage engine                                         | NO           | NO   | NO         |
| MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance     | NO           | NO   | NO         |
| InnoDB     | YES     | Supports transactions, row-level locking, and foreign keys | YES          | YES  | YES        |
| MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables  | NO           | NO   | NO         |
+------------+---------+------------------------------------------------------------+--------------+------+------------+
I would like innodb to be the default storage engine. What do I need to do?



Under the [mysqld] section in your MySQL configuration file, add:
default-storage-engine = innodb
The file is usually /etc/my.cnf; with Homebrew on a Mac it is typically /usr/local/etc/my.cnf (create it if it does not exist).
From the docs:
On Unix, Linux and Mac OS X, MySQL programs read startup options from the following files, in the specified order (top items are used first).
File Name            Purpose
/etc/my.cnf          Global options
/etc/mysql/my.cnf    Global options (as of MySQL 5.1.15)
SYSCONFDIR/my.cnf    Global options
$MYSQL_HOME/my.cnf   Server-specific options
defaults-extra-file  The file specified with --defaults-extra-file=path, if any
~/.my.cnf            User-specific options
The last one is never used by the daemon.
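After editing the file and restarting mysqld, it is easy to confirm the change took effect (a sketch; on servers older than 5.5 the variable is named storage_engine rather than default_storage_engine):

```shell
# Show the server-wide default engine after the restart
mysql -u root -p -e "SHOW VARIABLES LIKE '%storage_engine%';"
```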




source : http://linuxgazette.net/168/nielsen.html

MYSQL User ADD


#
# Connect to the local database server as user root
# You will be prompted for a password.
#
mysql -h localhost  -u root -p

#
# Now we see the 'mysql>' prompt and we can run
# the following to create a new database for Paul.
#
mysql> create database pauldb;
Query OK, 1 row affected (0.00 sec)

#
# Now we create the user paul and give him full 
# permissions on the new database
mysql> grant CREATE,INSERT,DELETE,UPDATE,SELECT on pauldb.* to paul@localhost;
Query OK, 0 rows affected (0.00 sec)

#
# Next we set a password for this new user
#
mysql> set password for paul = password('mysecretpassword');
Query OK, 0 rows affected (0.00 sec)

#
# Cleanup and exit
#
mysql> flush privileges;
mysql> exit;
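To verify the new account afterwards, something like the following works (run as root; passwords are prompted interactively):

```shell
# List paul's privileges as recorded by the server
mysql -h localhost -u root -p -e "SHOW GRANTS FOR 'paul'@'localhost';"

# Confirm paul can log in and sees his database
mysql -h localhost -u paul -p pauldb -e "SELECT DATABASE();"
```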

MYSQL Cluster On Ubuntu


This article will take you through setting up a MySQL cluster. As we will be using the mysql-cluster tar package, this guide should work with most distros, including Fedora and Ubuntu; I am testing the cluster on CentOS.
For this you need three servers: two will work as storage (data) nodes and one as the management server.
cluster1 192.168.1.2
cluster2 192.168.1.3
cluster3 192.168.1.1 – management server
Download mysql-cluster
(here we are using mysql-cluster-gpl-7.1.9a-linux-i686-glibc23.tar.gz as latest one available while this post was written.)
Install MySQL on the first two servers ON CLUSTER1 & CLUSTER2
#Create Mysql user
groupadd mysql
useradd -g mysql mysql
 
cd /usr/local/
tar zxvf /downloads/mysql-cluster-gpl-7.1.9a-linux-i686-glibc23.tar.gz
ln -s mysql-cluster-gpl-7.1.9a-linux-i686-glibc23 mysql
 
cd /usr/local/mysql
scripts/mysql_install_db --user=mysql
 
#change permission
chown -R root  .
chown -R mysql data
chgrp -R mysql .
 
#init script
cp support-files/mysql.server /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/mysql.server
chkconfig --add mysql.server
Install and configure the management server ON CLUSTER3
cd /usr/src
tar zxvf /downloads/mysql-cluster-gpl-7.1.9a-linux-i686-glibc23.tar.gz
cd mysql-cluster-gpl-7.1.9a-linux-i686-glibc23/
mv bin/ndb_mgm .
mv bin/ndb_mgmd .
chmod +x ndb_mg*
mv ndb_mg* /usr/bin/
 
cd
rm -rf /usr/src/mysql-cluster-gpl-7.1.9a-linux-i686-glibc23/
You now need to set up the config file ON CLUSTER3
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
 
vi config.ini
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.1.1  # the IP of THIS SERVER
# Storage Engines
[NDBD]
HostName=192.168.1.2  # the IP of the FIRST SERVER
DataDir= /var/lib/mysql-cluster
[NDBD]
HostName=192.168.1.3 # the IP of the SECOND SERVER
DataDir=/var/lib/mysql-cluster
# 2 MySQL Clients
[MYSQLD]
[MYSQLD]
mkdir -p /usr/local/mysql/mysql-cluster
Now, start the management server:
ndb_mgmd
Configure the storage/SQL servers and start mysql ON CLUSTER1 & CLUSTER2
vi /etc/my.cnf
[mysqld]
ndbcluster
ndb-connectstring=192.168.1.1 # the IP of the MANAGEMENT (THIRD) SERVER
[mysql_cluster]
ndb-connectstring=192.168.1.1 # the IP of the MANAGEMENT (THIRD) SERVER
 
#Now, we make the data directory and start the storage engine:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
/usr/local/mysql/bin/ndbd --initial
/etc/rc.d/init.d/mysql.server start
Check if it's working ON CLUSTER3 using ndb_mgm
[root@c3 mysql-cluster]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.1.2  (mysql-5.1.51 ndb-7.1.9, Nodegroup: 0, Master)
id=3 @192.168.1.3  (mysql-5.1.51 ndb-7.1.9, Nodegroup: 0)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.1  (mysql-5.1.51 ndb-7.1.9)
 
[mysqld(API)] 2 node(s)
id=4 @192.168.1.2  (mysql-5.1.51 ndb-7.1.9)
id=5 @192.168.1.3  (mysql-5.1.51 ndb-7.1.9)
Stop Cluster
To stop cluster run shutdown command in ndb_mgm Console
On storage clusters /etc/init.d/mysql.server stop
Start Cluster
Management Server -Cluster3
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
Storage Clusters -Cluster1 & Cluster2
/usr/local/mysql/bin/ndbd
/etc/init.d/mysql.server start
While creating tables, make sure to use ENGINE=NDBCLUSTER; tables created with any other engine are not replicated across the cluster.
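A quick sketch of why the engine matters (database and table names here are mine): only NDBCLUSTER tables are stored on the data nodes and visible from every SQL node; anything else stays local to the mysqld it was created on.

```shell
# Run on either SQL node
mysql -u root <<'SQL'
CREATE DATABASE IF NOT EXISTS clustertest;
USE clustertest;
CREATE TABLE replicated (i INT) ENGINE=NDBCLUSTER; -- stored on the data nodes
CREATE TABLE local_only (i INT) ENGINE=MyISAM;     -- exists on this node only
SQL
```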
---------------------------------------------------------------------------------------------------------------


MySQL Cluster Server Setup
Version 1.0 - 2/11/2005

LOD Communications, Inc.
(800) 959-6641
http://www.lod.com

Introduction
MySQL Cluster Server is a fault-tolerant, redundant, scalable database architecture built on the open-source MySQL application, and capable of delivering 99.999% reliability. In this paper we describe the process we used to set up, configure, and test a three-node mySQL cluster server in a test environment.

Schematic



Hardware
We used four Sun Ultra Enterprise servers in our test environment, but the process for setting up a mySQL cluster server on other UNIX- or Linux-based platforms is very similar, and this setup guide should be applicable with little or no modification.

Our four machines each fall into one of three roles:

1. storage nodes (mysql-ndb-1 and mysql-ndb-2)
2. API node (mysql-api-1)
3. management server and management console (mgmt)

Note that the storage nodes are also API nodes, but the API node is not a storage node. The API node is a full member of the cluster, but it does not store any cluster data, and its state (whether it is up or down) does not affect the integrity or availability of the data on the storage nodes. It can be thought of as a "client" of the cluster. Applications such as web servers live on the API nodes and communicate with the mySQL server process running locally, on the API node itself, which takes care of fetching data from the storage nodes. The storage nodes are API nodes as well, and technically additional applications could be installed there and communicate with the cluster via the mySQL server processes running on them, but for management and performance reasons this should probably be considered a sub-optimal configuration in a production environment.

Software
Sun Solaris 8 operating system
mysql-max-4.1.9

We used the precompiled binary distribution of mySQL server for Sun SPARC Solaris 8. Obviously, for implementation on other platforms, the appropriate binary distribution should be used. In all cases, the "max" mySQL distribution is required. The mySQL 4.1 download page can be found on the MySQL web site.

Procedure

Step 1. On both storage nodes, mysql-ndb-1 (192.168.0.33) and mysql-ndb-2 (192.168.0.34), obtain and install mySQL server:
mysql-ndb-1# groupadd mysql
mysql-ndb-1# useradd -g mysql mysql
mysql-ndb-1# cd /usr/local
mysql-ndb-1# wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-sun-solaris2.8-sparc.tar.gz/from/http://mysql.he.net/
mysql-ndb-1# gzip -dc mysql-max-4.1.9-sun-solaris2.8-sparc.tar.gz | tar xvf -
mysql-ndb-1# ln -s mysql-max-4.1.9-sun-solaris2.8-sparc mysql
mysql-ndb-1# cd mysql
mysql-ndb-1# scripts/mysql_install_db --user=mysql
mysql-ndb-1# chown -R root .
mysql-ndb-1# chown -R mysql data
mysql-ndb-1# chgrp -R mysql .
mysql-ndb-1# cp support-files/mysql.server /etc/init.d/mysql.server
mysql-ndb-2# groupadd mysql
mysql-ndb-2# useradd -g mysql mysql
mysql-ndb-2# cd /usr/local
mysql-ndb-2# wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-sun-solaris2.8-sparc.tar.gz/from/http://mysql.he.net/
mysql-ndb-2# gzip -dc mysql-max-4.1.9-sun-solaris2.8-sparc.tar.gz | tar xvf -
mysql-ndb-2# ln -s mysql-max-4.1.9-sun-solaris2.8-sparc mysql
mysql-ndb-2# cd mysql
mysql-ndb-2# scripts/mysql_install_db --user=mysql
mysql-ndb-2# chown -R root .
mysql-ndb-2# chown -R mysql data
mysql-ndb-2# chgrp -R mysql .
mysql-ndb-2# cp support-files/mysql.server /etc/init.d/mysql.server

    Do not start the mysql servers yet.

Step 2. Set up the management server and management console on host mgmt (192.168.0.32). This requires only two executables to be extracted from the mysql distribution. The rest can be deleted.
mgmt# gzip -dc mysql-max-4.1.9-sun-solaris2.8-sparc.tar.gz | tar xvf -
mgmt# cp mysql-max-4.1.9-sun-solaris2.8-sparc/bin/ndb_mgm /usr/bin
mgmt# cp mysql-max-4.1.9-sun-solaris2.8-sparc/bin/ndb_mgmd /usr/bin
mgmt# rm -r mysql-max-4.1.9-sun-solaris2.8-sparc
mgmt# mkdir /var/lib/mysql-cluster
mgmt# cd /var/lib/mysql-cluster
mgmt# vi config.ini

    The file config.ini contains configuration information for the cluster:
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.0.32           # IP address of this server
# Storage Nodes
[NDBD]
HostName=192.168.0.33           # IP address of storage-node-1
DataDir= /var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.34           # IP address of storage-node-2
DataDir=/var/lib/mysql-cluster
# Setup node IDs for mySQL API-servers (clients of the cluster)
[MYSQLD]
[MYSQLD]
[MYSQLD]
[MYSQLD]

    Start the management server and verify that it is running:
mgmt# ndb_mgmd
mgmt# ps -ef | grep [n]db

Step 3. On both storage nodes, mysql-ndb-1 (192.168.0.33) and mysql-ndb-2 (192.168.0.34), configure the mySQL servers:

mysql-ndb-1# vi /etc/my.cnf
mysql-ndb-2# vi /etc/my.cnf
    This is the configuration file (/etc/my.cnf) for the mysql server on both storage nodes:

[mysqld]
ndbcluster
ndb-connectstring='host=192.168.0.32'    # IP address of the management server
[mysql_cluster]
ndb-connectstring='host=192.168.0.32'    # IP address of the management server
    On both storage nodes, start the NDB storage engine and mysql server and verify that they are running:
mysql-ndb-1# mkdir /var/lib/mysql-cluster
mysql-ndb-1# cd /var/lib/mysql-cluster
mysql-ndb-1# /usr/local/mysql/bin/ndbd --initial
mysql-ndb-1# /etc/init.d/mysql.server start
mysql-ndb-1# ps -ef | grep [n]dbd

mysql-ndb-1# ps -ef | grep [m]ysqld

mysql-ndb-2# mkdir /var/lib/mysql-cluster
mysql-ndb-2# cd /var/lib/mysql-cluster
mysql-ndb-2# /usr/local/mysql/bin/ndbd --initial
mysql-ndb-2# /etc/init.d/mysql.server start
mysql-ndb-2# ps -ef | grep [n]dbd

mysql-ndb-2# ps -ef | grep [m]ysqld

    If the mysql server did not start up properly, check the logfile in /usr/local/mysql/data/${HOSTNAME}.err and correct the problem.

Step 4. Start the management console on the management server machine (mgmt) and query the status of the cluster:

mgmt# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.33  (Version: 4.1.9, starting, Nodegroup: 0, Master)
id=3    @192.168.0.34  (Version: 4.1.9, starting, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.32  (Version: 4.1.9)

[mysqld(API)]   4 node(s)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)

Step 5. Create a test database, populate a table using the NDBCLUSTER engine, and verify correct operation:
    On both storage nodes mysql-ndb-1 and mysql-ndb-2 create the test database:

mysql-ndb-1# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database foo;
Query OK, 1 row affected (0.09 sec)




mysql-ndb-2# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database foo;
Query OK, 1 row affected (0.13 sec)
    Back on storage node mysql-ndb-1, populate the database with a table containing some simple data:

mysql-ndb-1# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use foo;
Database changed
mysql> create table test1 (i int) engine=ndbcluster;
Query OK, 0 rows affected (0.94 sec)

mysql> insert into test1 () values (1);
Query OK, 1 row affected (0.02 sec)

mysql> select * from test1;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.01 sec)
    Now go to storage node mysql-ndb-2 and verify that the data is accessible:

mysql-ndb-2# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use foo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from test1;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.00 sec)
    This is a good sign, but note that it does not actually prove that the data is being replicated. The storage node (mysql-ndb-2) is also a cluster API node, and this test merely shows that it is able to retrieve data from the cluster. It demonstrates nothing with respect to the underlying storage mechanism in the cluster. This can be more clearly demonstrated with the following test.

    Kill off the NDB engine process (ndbd) on one of the storage nodes (mysql-ndb-2) in order to simulate failure of the storage engine:

mysql-ndb-2# ps -ef | grep [n]db
    root  3035  3034  0 17:28:41 ?        0:23 /usr/local/mysql/bin/ndbd --initial
    root  3034     1  0 17:28:41 ?        0:00 /usr/local/mysql/bin/ndbd --initial
mysql-ndb-2# kill -TERM 3034 3035
mysql-ndb-2# ps -ef | grep [n]db
    The management server will recognize that the storage engine on mysql-ndb-2 (192.168.0.34) has failed, but its API connection is still active:

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.33  (Version: 4.1.9, Nodegroup: 0)
id=3 (not connected, accepting connect from 192.168.0.34)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.32  (Version: 4.1.9)

[mysqld(API)]   4 node(s)
id=4    @192.168.0.33  (Version: 4.1.9)
id=5    @192.168.0.34  (Version: 4.1.9)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
    On the first storage node (mysql-ndb-1) populate another new table with some test data:

mysql-ndb-1# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use foo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> create table test2 (i int) engine=ndbcluster;
Query OK, 0 rows affected (1.00 sec)

mysql> insert into test2 () values (2);
Query OK, 1 row affected (0.01 sec)

mysql> select * from test2;
+------+
| i    |
+------+
|    2 |
+------+
1 row in set (0.01 sec)
    Back on the second storage node (mysql-ndb-2) perform the same select command:

mysql-ndb-2# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use foo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from test2;
+------+
| i    |
+------+
|    2 |
+------+
1 row in set (0.01 sec)
    The storage engine and the API server are two separate, distinct processes that are not inherently dependent on one another. Once the ndbd storage engine process is restarted on the second storage node, the data is replicated, as the following test demonstrates.

    First, restart the storage engine process on mysql-ndb-2:

mysql-ndb-2# /usr/local/mysql/bin/ndbd
    Next, shutdown the storage engine on mysql-ndb-1 either using the management console or command line kill:

mgmt# ndb_mgm
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.33  (Version: 4.1.9, Nodegroup: 0, Master)
id=3    @192.168.0.34  (Version: 4.1.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.32  (Version: 4.1.9)

[mysqld(API)]   4 node(s)
id=4    @192.168.0.33  (Version: 4.1.9)
id=5    @192.168.0.34  (Version: 4.1.9)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)

ndb_mgm> 2 stop
Node 2 has shutdown.
    Now, to determine if the SQL data was replicated when the storage engine on mysql-ndb-2 was restarted, try the query on either (or both) hosts:

mysql-ndb-1# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use foo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from test2;
+------+
| i    |
+------+
|    2 |
+------+
1 row in set (0.01 sec)
mysql-ndb-2# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use foo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from test2;
+------+
| i    |
+------+
|    2 |
+------+
1 row in set (0.01 sec)
    This shows that the data is being replicated on both storage nodes. Restart the storage engine on mysql-ndb-1:

mysql-ndb-1# /usr/local/mysql/bin/ndbd


Step 6. Next, we add a cluster API node. This node is a full member of the cluster, but does not run the NDB storage engine. Data is not replicated on this node; it functions essentially as a "client" of the cluster server. Typically, we would install applications that require access to the MySQL data (web servers, etc.) on this machine. The applications talk to the MySQL server on localhost, which then handles the underlying communication with the cluster in order to fetch the requested data.

    First, install the mysql server on the API node mysql-api-1 (192.168.0.35):

mysql-api-1# groupadd mysql
mysql-api-1# useradd -g mysql mysql
mysql-api-1# cd /usr/local
mysql-api-1# wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-sun-solaris2.8-sparc.tar.gz/from/http://mysql.he.net/
mysql-api-1# gzip -dc mysql-max-4.1.9-sun-solaris2.8-sparc.tar.gz | tar xvf -
mysql-api-1# ln -s mysql-max-4.1.9-sun-solaris2.8-sparc mysql
mysql-api-1# cd mysql
mysql-api-1# scripts/mysql_install_db --user=mysql
mysql-api-1# chown -R root  .
mysql-api-1# chown -R mysql data
mysql-api-1# chgrp -R mysql .
mysql-api-1# cp support-files/mysql.server /etc/init.d/mysql.server
    Install a simple /etc/my.cnf file:

[mysqld]
ndbcluster
ndb-connectstring='host=192.168.0.32'    # IP address of the management server
[mysql_cluster]
ndb-connectstring='host=192.168.0.32'    # IP address of the management server
    Now start the MySQL server:

mysql-api-1# /etc/init.d/mysql.server start
    Perform some test queries on the database tables we created earlier:

mysql-api-1# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database foo;
Query OK, 1 row affected (0.11 sec)

mysql> use foo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from test1;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.01 sec)

mysql> select * from test2;
+------+
| i    |
+------+
|    2 |
+------+
1 row in set (0.01 sec)
    At this point you can check the cluster status on the management console and verify that the API node is now connected:

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.33  (Version: 4.1.9, Nodegroup: 0)
id=3    @192.168.0.34  (Version: 4.1.9, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.32  (Version: 4.1.9)

[mysqld(API)]   4 node(s)
id=4   (Version: 4.1.9)
id=5   (Version: 4.1.9)
id=6    @192.168.0.35  (Version: 4.1.9)
id=7 (not connected, accepting connect from any host)

    Our configuration now resembles the diagram at the top of the page.
Step 7. Finally, we should verify the fault tolerance of the cluster when servicing queries from the API node.

    With the cluster up and operating correctly, use the API node to create a new table and insert some test data:

mysql-api-1# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 258519 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use foo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> create table test3 (i int) engine=ndbcluster;
Query OK, 0 rows affected (0.81 sec)

mysql> quit
Bye

    Now, insert some random data into the table, either by hand or with a quick script:

#!/bin/sh
for i in 1 2 3 4 5 6 7 8 9 10
do
        random=`perl -e "print int(rand(100));"`
        echo "use foo; insert into test3 values ($random);" | mysql -u root
done
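As a tweak on the loop above, the same ten rows can go in through a single mysql client session by building the whole batch first. This is only a sketch (it uses awk instead of perl for the random numbers, so it has one less dependency, and it prints the batch rather than running it):

```shell
#!/bin/sh
# Build all ten INSERT statements up front, then run them in one mysql
# client session instead of forking a new client per row.
sql="use foo;"
for i in 1 2 3 4 5 6 7 8 9 10
do
        # awk stands in for perl here; seed varies per iteration and per run
        random=$(awk -v seed="$(($$ + i))" 'BEGIN { srand(seed); print int(rand() * 100) }')
        sql="$sql insert into test3 values ($random);"
done
echo "$sql"                      # inspect the generated batch
# echo "$sql" | mysql -u root    # then run it against the cluster
```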
    Try a test query on the API node:

mysql-api-1# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 258551 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use foo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from test3;
+------+
| i    |
+------+
|   92 |
|   20 |
|   18 |
|   84 |
|   49 |
|   22 |
|   54 |
|   91 |
|   79 |
|   52 |
+------+
10 rows in set (0.02 sec)
    Looks good. Now, disconnect the network cable from the first storage node so that it falls out of the cluster. Within a few seconds, the management console will recognize that it has disappeared:

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 192.168.0.33)
id=3    @192.168.0.34  (Version: 4.1.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.32  (Version: 4.1.9)

[mysqld(API)]   4 node(s)
id=4 (not connected, accepting connect from any host)
id=5   (Version: 4.1.9)
id=6    @192.168.0.35  (Version: 4.1.9)
id=7 (not connected, accepting connect from any host)
    Is the cluster data still available to the API node?

mysql-api-1# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 258552 to server version: 4.1.9-max

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use foo;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from test3;
+------+
| i    |
+------+
|   54 |
|   91 |
|   79 |
|   52 |
|   92 |
|   20 |
|   18 |
|   84 |
|   49 |
|   22 |
+------+
10 rows in set (0.02 sec)
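This sort of check is easy to script by counting the data-node slots that ndb_mgm reports as unconnected. The sketch below parses a canned transcript so it can run anywhere; on the management node you would feed it the live command output instead:

```shell
#!/bin/sh
# Count disconnected ndbd slots in ndb_mgm's status report. A canned
# transcript stands in for the live command here; on the management node,
# replace the here-document with:  status=$(ndb_mgm -e show)
status=$(cat <<'EOF'
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 192.168.0.33)
id=3    @192.168.0.34  (Version: 4.1.9, Nodegroup: 0)
EOF
)
# Only count "not connected" inside the [ndbd(NDB)] section, since unused
# API slots also report "not connected" and would inflate the count.
down=$(echo "$status" | awk '/^\[/ { s = /ndbd\(NDB\)/ } s && /not connected/ { n++ } END { print n + 0 }')
echo "data nodes down: $down"
```

The same awk filter works unchanged on a full live show report, for the reason noted in the comment.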
    Now, plug the disconnected storage node back into the network. It will attempt to rejoin the cluster, but will probably be shut down by the management server, and something similar to the following will appear in the error log (/var/lib/mysql-cluster/ndb_2_error.log):

Date/Time: Saturday 12 February 2005 - 12:46:21
Type of error: error
Message: Arbitrator shutdown
Fault ID: 2305
Problem data: Arbitrator decided to shutdown this node
Object of reference: QMGR (Line: 3796) 0x0000000a
ProgramName: /usr/local/mysql/bin/ndbd
ProcessID: 1185
TraceFile: /var/lib/mysql-cluster/ndb_2_trace.log.3
***EOM***

    Restart the ndb storage engine process on that node and verify that it rejoins the cluster properly:

mysql-ndb-1# /usr/local/mysql/bin/ndbd


ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.33  (Version: 4.1.9, Nodegroup: 0)
id=3    @192.168.0.34  (Version: 4.1.9, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.32  (Version: 4.1.9)

[mysqld(API)]   4 node(s)
id=4   (Version: 4.1.9)
id=5   (Version: 4.1.9)
id=6    @192.168.0.35  (Version: 4.1.9)
id=7 (not connected, accepting connect from any host)

Miscellaneous
  • Remember that in order for SQL data to be stored (replicated) on the cluster, database tables must be created specifying engine=NDBCLUSTER (as shown in the examples above). It is possible to use this mechanism to specify different storage engines for different tables within the same database, depending on individual performance and reliability requirements. Non-critical database tables need not be stored on the cluster.
  • It is possible to make NDBCLUSTER the default storage engine by adding a line to the /etc/my.cnf configuration file:
[mysqld]
default-table-type=NDBCLUSTER
  • Occasionally, after abnormal cluster node termination (for example, a system crash) we see "hung" connections, and upon restart the failed node is unable to join the cluster. In this case, the session should be manually cleared on the management console using the command, "purge stale sessions":
ndb_mgm> purge stale sessions
Purged sessions with node id's: 3
ndb_mgm> 




Please direct questions, comments, and suggestions regarding this document to consult@lod.com


---------------------------------------------------------------------------------------------------------------

EC2, MySQL Cluster, and You!

The past week I’ve been pounding my head bloody going round and round with setting up a MySQL Cluster in EC2. First I tried it with Ubuntu, then Fedora 6, and then finally I learned to trust the fine folks at Canonical and believe that their distro was tight, and damn is it ever tight. The beauty of using Ubuntu is that everything you need is installed by default, and there is no mucking about trying to get the right packages, dependencies, or source. Yes, this is probably not the optimal way of going about this, but I needed a workable solution fast, and while there is a whole pile of RPMs ready to roll, the nightmare of getting simple things like Perl dependencies satisfied in Fedora was enough to send me screaming out of the cloud.
Anyways, I have a wicked basic cluster running using the following process:
On the Management Node I’m using this config.ini which is sort of cribbed together (/var/lib/mysql-cluster/config.ini)
# Options affecting ndbd processes on all data nodes:
[NDBD DEFAULT]
NoOfReplicas=2    # Number of replicas
DataMemory=256M    # How much memory to allocate for data storage
IndexMemory=256M   # How much memory to allocate for index storage
                  # For DataMemory and IndexMemory, we have used the
                  # default values. Since the "world" database takes up
                  # only about 500KB, this should be more than enough for
                  # this example Cluster setup.

# TCP/IP options:
[TCP DEFAULT]
portnumber=2202   # This is the default; however, you can use any
                  # port that is free for all the hosts in cluster
                  # Note: It is recommended beginning with MySQL 5.0 that
                  # you do not specify the portnumber at all and simply allow
                  # the default value to be used instead

# Management process options:
[NDB_MGMD]
hostname=mgmn           # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster  # Directory for MGM node log files

# Options for data node "A":
[NDBD]
                                # (one [NDBD] section per data node)
hostname=ndbda           # Hostname or IP address
datadir=/mnt/mysql/data   # Directory for this data node's data files

# Options for data node "B":
[NDBD]
hostname=ndbdb           # Hostname or IP address
datadir=/mnt/mysql/data   # Directory for this data node's data files

# SQL node options:
[MYSQLD]
hostname=sqln           # Hostname or IP address
                                # (additional mysqld connections can be
                                # specified for this node for various
                                # purposes such as running ndb_restore)
Now, in the end I moved this into /mnt (cp -ar /var/lib/mysql-cluster) so that I didn’t have the threat of running out of disk space on the primary partition.
On the SQL Node in mysql.cnf (/etc/mysql/my.cnf) I have nothing more than this:
# Options for mysqld process:
[MYSQLD]
ndbcluster                      # run NDB storage engine
ndb-connectstring=mgmn  # location of management server
log=/var/lib/mysql/mysql.log
I am experimenting with adding settings back in, but I’m not too sure if they belong in the config.ini on the management node or in here. My gut tells me management node. Anyhow, with this I copied the contents of /var/lib/mysql into /mnt (cp -ar again), renamed the old directory, and created a symbolic link pointing to the new location. Kludgey, yes, but I am still learning my way around MySQL and its various settings. Likely, I will figure out which config file gets the data directory settings and I’ll make the appropriate changes. And yes, you read that right, I do have logging turned on, because I am the kind of guy who needs to know.
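The copy-rename-symlink shuffle described above is worth scripting so the steps happen in a safe order. This sketch uses scratch directories so it can be dry-run anywhere; the real paths on the SQL node are noted in the comments:

```shell
#!/bin/sh
# The copy / rename / symlink shuffle, using scratch paths so it can be
# dry-run anywhere. On the SQL node the real paths would be
# src=/var/lib/mysql and dst=/mnt/mysql (stop mysqld first).
scratch=$(mktemp -d)
src="$scratch/var-lib-mysql"
dst="$scratch/mnt-mysql"
mkdir -p "$src"
echo placeholder > "$src/ibdata1"    # stands in for the real datadir contents

cp -a "$src" "$dst"     # 1. copy the datadir wholesale, preserving modes
mv "$src" "$src.old"    # 2. keep the original around, renamed out of the way
ln -s "$dst" "$src"     # 3. point the old path at the new location
ls -l "$src"
```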
On the Data Node in my.cnf (/etc/mysql/my.cnf) this plain vanilla setup:
# Options for ndbd process:
[MYSQL_CLUSTER]
ndb-connectstring=mgmn  # location of management server
Now, to tie all these boxes together I ended up using a hosts file, recommended by Paul Moen and my boss, and with an endorsement like that I just had to run with it! On all of the nodes, in /etc/hosts I dropped the internal IP addresses of each box in the cloud (nslookup domU-12-34-56-78-9A-B1.z-2.compute-1.internal):
# Mysql Cluster data node
10.1.2.3 ndbda
10.4.5.6 ndbdb
# Mysql Cluster mgm node
10.7.8.9 mgmn
# MySQL Cluster sql node
10.10.11.12 sqln
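Pulling the address out of that nslookup output is scriptable too. The function below parses an nslookup-style transcript (canned here, with a placeholder resolver address, so the sketch runs anywhere) and pairs the answer with the short name to emit a hosts line:

```shell
#!/bin/sh
# Turn an nslookup transcript into an /etc/hosts line for a cluster short
# name. The transcript is canned (the 172.16.x resolver is a placeholder);
# on an instance you would capture the real nslookup output instead.
transcript='Server:  172.16.0.23
Address: 172.16.0.23#53

Name: domU-12-34-56-78-9A-B1.z-2.compute-1.internal
Address: 10.7.8.9'

hosts_line() {
    # Skip the resolver's own Address line (it carries a #53 port suffix)
    # and keep the answer's address, then append the short name.
    ip=$(echo "$1" | awk '/^Address/ && $2 !~ /#/ { print $2; exit }')
    echo "$ip $2"
}
hosts_line "$transcript" mgmn
```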
Starting everything up begins with the management cluster:
ndb_mgmd -f /mnt/mysql-cluster/config.ini
Then the data nodes:
ndbd --initial
Note, you only need the --initial part the first time the node comes up; if you are restarting a cluster you can skip it.
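That rule is easy to forget, so one option is a small wrapper that tracks first-start with a marker file. The marker path is an arbitrary choice, and the ndbd call is stubbed with echo here so the sketch runs without a cluster:

```shell
#!/bin/sh
# Start ndbd with --initial only on the node's very first start, tracked
# with a marker file. The ndbd invocation is stubbed with echo so this
# runs anywhere; on a data node, drop the echo and keep the marker
# somewhere persistent such as /var/lib/mysql-cluster.
marker=$(mktemp -d)/.ndbd-initialized
start_ndbd() {
    if [ ! -f "$marker" ]; then
        touch "$marker"
        echo "ndbd --initial"
    else
        echo "ndbd"
    fi
}
first=$(start_ndbd)      # first boot: wants --initial
second=$(start_ndbd)     # any later restart: plain ndbd
echo "$first / $second"
```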
Lastly, the SQL node:
/etc/init.d/mysql start
On the management node you can issue a SHOW to figure out if your bacon is frying:
root@mgmn:~# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @10.1.2.3  (Version: 5.0.38, Nodegroup: 0, Master)
id=3    @10.4.5.6  (Version: 5.0.38, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.7.8.9  (Version: 5.0.38)

[mysqld(API)]   1 node(s)
id=4    @10.10.11.12  (Version: 5.0.38)
Now, what about backups? Well, I am in the process of experimenting with issuing ndb_mgm -e “START BACKUP” on the cluster manager, and that will dump a backup to each of the data nodes. Ideally, I would like to issue periodic backups on each individual node in a staggered fashion and have those gzipped and sent up to S3. What I need to figure out is if I can issue a backup command for individual nodes, like START BACKUP Node_2 or something thereabouts. If that is the case I could then grow the data nodes out to the maximum four and take snapshots every 15 minutes, which could give us decent coverage if our whole section of the cloud decided to pop.
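One shape the staggered-backup idea could take is sketched below. START BACKUP is a real ndb_mgm command, but the per-node selection question is still open and the S3 upload tool is a placeholder, so those lines stay commented out:

```shell
#!/bin/sh
# Sketch of a backup job: trigger a cluster backup from the manager, then
# (on a data node) archive its BACKUP directory under a timestamped name
# for upload. The ndb_mgm, tar, and upload steps are commented out; the
# upload tool named below is a placeholder, not a real command.
stamp=$(date +%Y%m%d-%H%M%S)
archive="ndb-backup-$stamp.tar.gz"
echo "would create $archive"
# ndb_mgm -e "START BACKUP"
# tar czf "$archive" /mnt/mysql/data/BACKUP
# some-s3-upload-tool "$archive" s3://your-bucket/"$archive"
```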
If you have any questions, criticisms, or gripes feel free to slap me with them as I feel like I am still missing a huge chunk of the picture with all of this. :-D