Friday, January 10, 2020

AWS KMS Key Encrypt | Decrypt | Key Rotation

AWS KMS encrypts your data with encryption keys that you manage. It is also integrated with AWS CloudTrail to provide encryption key usage logs to help meet your auditing, regulatory and compliance needs.
Fig : AWS KMS Encryption & Decryption
You can also follow this article on YouTube.
  1. Create a Customer Master Key (CMK)

    Let's create a new Customer Master Key that will be used to encrypt data.
    aws kms create-key
    If no key policy is set, a special default policy is applied. This behaviour is different from creating a key in the console GUI.
    output
    {
        "KeyMetadata": {
            "AWSAccountId": "123411223344",
            "KeyId":"6fa6043b-2fd4-433b-83a5-3f4193d7d1a6",
            "Arn":"arn:aws:kms:us-east-1:123411223344:key/6fa6043b-2fd4-433b-83a5-3f4193d7d1a6",
            "CreationDate": 1547913852.892,
            "Enabled": true,
            "Description": "",
            "KeyUsage": "ENCRYPT_DECRYPT",
            "KeyState": "Enabled",
            "Origin": "AWS_KMS",
            "KeyManager": "CUSTOMER"
        }
    }
    Note the KeyId from the above.

    Create a Key Alias

    An alias is an optional display name for a CMK. To simplify code that runs in multiple regions, you can use the same alias name but point it to a different CMK in each region.
    aws kms create-alias \
        --alias-name "alias/kms-demo" \
        --target-key-id "6fa6043b-2fd4-433b-83a5-3f4193d7d1a6"
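    To confirm that the alias resolves to the key we just created, you can describe the key by its alias; the output should show the same KeyId as above.
    aws kms describe-key --key-id "alias/kms-demo"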
  2. Encrypt Data with CMK

    Let's encrypt a local file, confidential_data.txt:
    aws kms encrypt --key-id "alias/kms-demo" \
        --plaintext fileb://confidential_data.txt \
        --output text \
        --query CiphertextBlob
    The CiphertextBlob above is returned base64 encoded. To decode it and save the raw ciphertext to a file:
    aws kms encrypt --key-id "alias/kms-demo" \
        --plaintext fileb://confidential_data.txt \
        --output text \
        --query CiphertextBlob | base64 --decode > encrypted_test_file

    Encrypted upload to S3

    If you want to upload files to S3 encrypted with the newly created key:
    aws s3 cp confidential_data.txt \
        s3://kms-key-rotation-test-bkt-01 \
        --sse aws:kms \
        --sse-kms-key-id "6fa6043b-2fd4-433b-83a5-3f4193d7d1a6"
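    To confirm the object really was encrypted with the intended CMK, check the object's metadata; the response should include ServerSideEncryption set to aws:kms and the key ARN in SSEKMSKeyId (bucket and object names as used above).
    aws s3api head-object \
        --bucket kms-key-rotation-test-bkt-01 \
        --key confidential_data.txt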
  3. Decrypt Data with CMK

    Since the ciphertext produced by KMS includes metadata identifying the CMK, we don't have to specify the key ID when decrypting the data.
    aws kms decrypt --ciphertext-blob \
        fileb://encrypted_test_file \
        --output text \
        --query Plaintext
    The decrypted output is base64 encoded. To decode it and save it to a file:
    aws kms decrypt \
        --ciphertext-blob fileb://encrypted_test_file \
        --output text \
        --query Plaintext | base64 --decode > decrypted_confidential_data.txt
  4. Rotate the Customer Master Key (CMK)

    There are two ways of rotating your CMK:
    • Method 1: Enable automatic key rotation in KMS; the key material is rotated every 365 days
    • Method 2: Manually rotate your CMK; you control the rotation period

    Enable Automatic Key Rotation

    Get the current status of key rotation
    aws kms get-key-rotation-status --key-id "alias/kms-demo"
    If the output shows false, rotation is not enabled. Let's enable it:
    aws kms enable-key-rotation --key-id "alias/kms-demo"
    Check the status again,
    aws kms get-key-rotation-status --key-id "alias/kms-demo"

    Manual Key Rotation

    Here you create a new CMK (as in step 1) and update the alias to point to the new CMK's KeyId.
    # List current alias,
    aws kms list-aliases --key-id "6fa6043b-2fd4-433b-83a5-3f4193d7d1a6"
    
    # If no alias, set one.
    aws kms create-alias --alias-name alias/my-shiny-encryption-key --target-key-id "6fa6043b-2fd4-433b-83a5-3f4193d7d1a6"
    
    # Point the alias to new CMK KeyID
    aws kms update-alias --alias-name alias/my-shiny-encryption-key --target-key-id 0987dcba-09fe-87dc-65ba-ab0987654321
    Note: When you begin using the new CMK, be sure to keep the original CMK enabled so that AWS KMS can decrypt data that the original CMK encrypted. When decrypting data, KMS identifies the CMK that was used to encrypt the data, and it uses the same CMK to decrypt the data. As long as you keep both the original and new CMKs enabled, AWS KMS can decrypt any data that was encrypted by either CMK.
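    If you would rather not keep the old CMK around indefinitely, existing ciphertexts can be re-encrypted under the new CMK. A minimal sketch, reusing the file encrypted in step 2 and the example new CMK KeyID from above:
    # Re-encrypt the existing ciphertext under the new CMK
    aws kms re-encrypt \
        --ciphertext-blob fileb://encrypted_test_file \
        --destination-key-id "0987dcba-09fe-87dc-65ba-ab0987654321" \
        --output text \
        --query CiphertextBlob | base64 --decode > re_encrypted_test_file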
  5. Disabling KMS Keys

    Now, let's say you have been using the key for some time and no longer want to use it; you can disable the CMK before deletion.
    aws kms disable-key --key-id "6fa6043b-2fd4-433b-83a5-3f4193d7d1a6"
    If you try to download the encrypted S3 object again, you will run into an error (KMS.DisabledException).
  6. Deleting KMS Keys

    You can delete unused or older keys to avoid future costs.
    aws kms schedule-key-deletion --key-id "6fa6043b-2fd4-433b-83a5-3f4193d7d1a6"
    Note: You will never be able to retrieve the file from S3 once you delete the CMK!
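    By default the deletion is scheduled 30 days out. You can shorten the waiting period (minimum 7 days), and while the key is still pending deletion you can change your mind and cancel it:
    # Schedule deletion with the shortest allowed waiting period
    aws kms schedule-key-deletion --key-id "6fa6043b-2fd4-433b-83a5-3f4193d7d1a6" --pending-window-in-days 7

    # Cancel a pending deletion (the key returns to the Disabled state)
    aws kms cancel-key-deletion --key-id "6fa6043b-2fd4-433b-83a5-3f4193d7d1a6"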

Thursday, October 31, 2019

Cloud migration steps

Cloud infrastructure offers many benefits compared to on-premises infrastructure.
Here they are in short:
  • cost reduction
  • security
  • scalability
  • mobility
  • disaster recovery
  • control
  • competitive edge

1. Plan and prepare for migration

Before you start the actual migration to the cloud, you have to prepare for it. The level of preparation details depends on your business, but there are some basic steps you should take.
First, you should be clear about the reasons why you’re moving to the cloud. The cloud offers many benefits, but you must be sure what exact benefits your organization will get by moving your applications to the cloud.
It may be a good idea to assign a manager to plan and oversee the entire migration process. During a large migration project, organizations have to make many technical plans and decisions, and having a specialist is critical to the success of the project.
When you move an application from an on-premise data center to the cloud, there are two ways you can migrate your application—a shallow cloud integration or a deep cloud integration.
For a shallow cloud integration (sometimes called “lift-and-shift”), you move the on-premise application to the cloud, and make no—or limited—changes to the servers in the cloud for the purpose of running the application. Any application changes are just enough to get it to run in the new environment. You don’t use cloud-unique services. This model is also known as lift-and-shift because the application is lifted “as is” and moved, or shifted, to the cloud intact.
For a deep cloud integration, you modify your application during the migration process to take advantage of key cloud capabilities. This might be something simple like using auto scaling and dynamic load balancing, or it might be as sophisticated as utilizing serverless computing capabilities for portions of the application.

2. Choose your cloud environment

Before you start your cloud migration, you have to decide what kind of cloud model you will adopt. First, you must choose whether you want to go single-cloud or multi-cloud.
A single cloud environment is accomplished by using a single cloud provider to serve any and all applications or services that the organization decides to migrate to the cloud. Single cloud environments can utilize either private or public clouds, using whichever one better serves their current and future needs.
They enable organizations to move workloads to the cloud as their needs grow, with the option to expand the number of virtualized servers if their need grows beyond a single cloud server’s limits. Often, organizations with a single cloud model are employing the cloud for a single service or application, such as email, enterprise resource planning (ERP), customer relationship management (CRM), or similar.
In a multi-cloud environment, an organization uses multiple different public cloud services, often from multiple different providers. The different clouds may be used for various tasks to achieve best-of-breed results or to reduce vendor lock-in. This reflects the growing acknowledgement that not all clouds are created equal -- Marketing and Sales, for instance, likely have different needs than Software Development or R&D, and different cloud solutions can meet those requirements more effectively.
Multiple clouds also give organizations added peace of mind by minimizing dependence on any one provider, often decreasing costs and increasing flexibility.
Based on a service that the cloud is offering, we are speaking of either:
  • IaaS (Infrastructure-as-a-Service)
  • PaaS (Platform-as-a-Service)
  • SaaS (Software-as-a-Service)
  • or, Storage, Database, Information, Process, Application, Integration, Security, Management, Testing-as-a-service

3. Migrate applications and data & review

If you have planned your migration carefully, the actual migration process should go smoothly and quickly.
Depending on the size of your databases and applications, you will use different techniques for actually copying everything over. If you don’t have too much to migrate, you can just copy the data over your internet connection. This approach isn’t ideal for larger workloads. You might have very long transfer times or charges from the cloud provider. To deal with this, you could compress the data before sending it. Alternatively, you could ship your physical drives to the provider to reduce bandwidth costs.
It’s important to take care of security during the migration. Any temporary storage for your data should be as secure as the end destination.
Cloud providers will most likely give you access to various cloud migration tools. Use them to help you with migration.
Even after you’ve finished migrating everything to the cloud, there are a few more things to consider. Most important is resource optimization. The cloud is optimized for dynamic resource allocation, and when you allocate resources (servers, for example) statically, you’re not taking advantage of the cloud’s strengths. As you move into the cloud, make sure your teams have a plan for distributing resources to your application.

Conclusion

Moving your business applications and data to the cloud can be a great strategic move that gives you a competitive edge by reducing IT costs, enabling application scalability and many other benefits.
The complexity of the cloud migration process depends mostly on the size and complexity of your business operations. In this article, we have covered the basic steps you should have in mind when migrating to the cloud.

Friday, November 2, 2018

Creating A GraphQL Server With Node.js And Express

GraphQL is a language that enables you to provide a complete and understandable description of the data in your API. Furthermore it gives clients the power to ask for exactly what they need and nothing more. The project’s website can be found at http://graphql.org/.
There are several advantages of GraphQL.
GraphQL is declarative: Query responses are decided by the client rather than the server. A GraphQL query returns exactly what a client asks for and no more.
GraphQL is compositional: A GraphQL query itself is a hierarchical set of fields. The query is shaped just like the data it returns. It is a natural way for product engineers to describe data requirements.
GraphQL is strongly-typed: A GraphQL query can be ensured to be valid within a GraphQL type system at development time, allowing the server to make guarantees about the response. This makes it easier to build high-quality client tools.
In this tutorial you’ll learn how to set up a GraphQL server with Node.js and Express. We’ll be using the Express middleware express-graphql in our example. Furthermore you’ll learn how to use GraphQL on the client side to send queries and mutations to the server.
Let’s get started …
Setting Up The Project
To set up a GraphQL Node.js server, let’s start by creating a new empty project folder first:
$ mkdir gql-server
Change into that directory and initiate a new package.json file by executing the following NPM command:
$ npm init
Furthermore create a new server.js file in the project directory. That will be the file where the code required to implement the Node.js GraphQL server will be inserted in the next section:
$ touch server.js
Finally, make sure that the NPM packages graphql, express and express-graphql are added to the project:
$ npm install graphql express express-graphql --save
Having installed these packages successfully we’re now ready to implement a first GraphQL server.
Creating A Basic GraphQL Server With Express
Now that the project setup is ready, let’s create a first server implementation by inserting the following JS code in server.js:
var express = require('express');
var express_graphql = require('express-graphql');
var { buildSchema } = require('graphql');
// GraphQL schema
var schema = buildSchema(`
    type Query {
        message: String
    }
`);
// Root resolver
var root = {
    message: () => 'Hello World!'
};
// Create an express server and a GraphQL endpoint
var app = express();
app.use('/graphql', express_graphql({
    schema: schema,
    rootValue: root,
    graphiql: true
}));
app.listen(4000, () => console.log('Express GraphQL Server Now Running On localhost:4000/graphql'));
First we’re making sure that express, express-graphql and the buildSchema function from the graphql package are imported. Next we’re creating a simple GraphQL schema by using the buildSchema function.
To create the schema we’re calling the function and passing in a string that contains the IDL (GraphQL Interface Definition Language) code which is used to describe the schema. A GraphQL schema describes the complete API’s type system. It includes the complete set of data and defines how a client can access that data. Each time the client makes an API call, the call is validated against the schema. Only if the validation is successful is the action executed. Otherwise, an error is returned.
Next a root resolver is created. A resolver contains the mapping of actions to functions. In our example from above the root resolver contains only one action: message. To keep things easy, the assigned function just returns the string Hello World!. Later on you’ll learn how to include multiple actions and assign different resolver functions.
Finally the Express server is created with a GraphQL endpoint: /graphql. To create the GraphQL endpoint first a new express instance is stored in app. Next the app.use method is called and two parameters are provided:
  • First, the URL endpoint as a string
  • Second, the result of the express_graphql function is handed over. A configuration object is passed into the call of express_graphql containing three properties
The three configuration properties which are used for the Express GraphQL middleware are the following:
  • schema: The GraphQL schema which should be attached to the specific endpoint
  • rootValue: The root resolver object
  • graphiql: Must be set to true to enable the GraphiQL tool when accessing the endpoint in the browser. GraphiQL is a graphical, interactive, in-browser GraphQL IDE. By using this tool you can directly write your queries in the browser and try out the endpoint.
Finally app.listen is called to start the server process on port 4000.
The Node.js server can be started by executing the following command in the project directory:
$ node server.js
Having started the server process you should be able to see the output
Express GraphQL Server Now Running On localhost:4000/graphql
on the command line. If you access localhost:4000/graphql in the browser, the GraphiQL web interface should load.
In the query editor type in the following code:
{
    message
}
Next, hit the Execute Query button and you should see the message 'Hello World!' in the response data.
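If you prefer the command line over the GraphiQL interface, the same query can also be sent with curl while the server is running (a quick sketch, assuming the default port 4000 used above):
$ curl -X POST http://localhost:4000/graphql \
    -H "Content-Type: application/json" \
    -d '{"query": "{ message }"}'
The response should be {"data":{"message":"Hello World!"}}.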
Implementing A More Sophisticated Example
Now that you have a basic understanding of how to implement a GraphQL server with Node.js and Express, let’s continue with a more sophisticated example. Add a new JS file to the project:
$ touch server2.js
Next let’s add the following implementation:
var express = require('express');
var express_graphql = require('express-graphql');
var { buildSchema } = require('graphql');
// GraphQL schema
var schema = buildSchema(`
    type Query {
        course(id: Int!): Course
        courses(topic: String): [Course]
    },
    type Course {
        id: Int
        title: String
        author: String
        description: String
        topic: String
        url: String
    }
`);
var coursesData = [
    {
        id: 1,
        title: 'The Complete Node.js Developer Course',
        author: 'Andrew Mead, Rob Percival',
        description: 'Learn Node.js by building real-world applications with Node, Express, MongoDB, Mocha, and more!',
        topic: 'Node.js',
        url: 'https://codingthesmartway.com/courses/nodejs/'
    },
    {
        id: 2,
        title: 'Node.js, Express & MongoDB Dev to Deployment',
        author: 'Brad Traversy',
        description: 'Learn by example building & deploying real-world Node.js applications from absolute scratch',
        topic: 'Node.js',
        url: 'https://codingthesmartway.com/courses/nodejs-express-mongodb/'
    },
    {
        id: 3,
        title: 'JavaScript: Understanding The Weird Parts',
        author: 'Anthony Alicea',
        description: 'An advanced JavaScript course for everyone! Scope, closures, prototypes, this, build your own framework, and more.',
        topic: 'JavaScript',
        url: 'https://codingthesmartway.com/courses/understand-javascript/'
    }
]
var getCourse = function(args) {
    var id = args.id;
    return coursesData.filter(course => {
        return course.id == id;
    })[0];
}
var getCourses = function(args) {
    if (args.topic) {
        var topic = args.topic;
        return coursesData.filter(course => course.topic === topic);
    } else {
        return coursesData;
    }
}
var root = {
    course: getCourse,
    courses: getCourses
};
// Create an express server and a GraphQL endpoint
var app = express();
app.use('/graphql', express_graphql({
    schema: schema,
    rootValue: root,
    graphiql: true
}));
app.listen(4000, () => console.log('Express GraphQL Server Now Running On localhost:4000/graphql'));
Ok, let’s examine the code step by step. First we’re defining a schema which now consists of a custom type Course and two query actions.
The Course object type consists of six properties in total. The defined query actions enable the user to retrieve a single course by ID or to retrieve an array of Course objects by course topic.
To be able to return data without the need to connect to a database we’re defining the coursesData array with some dummy course data inside.
In the root resolver we’re connecting the course query action to the getCourse function and the courses query action to the getCourses function.
Accessing The GraphQL API
Now let’s start the Node.js server process again and execute the code from file server2.js with the following command:
$ node server2.js
If you’re opening up URL localhost:4000/graphql in the browser you should be able to see the GraphiQL web interface, so that you can start typing in queries. First let’s retrieve one single course from our GraphQL endpoint. Insert the following query code:
query getSingleCourse($courseID: Int!) {
    course(id: $courseID) {
        title
        author
        description
        topic
        url
    }
}
The getSingleCourse query operation is expecting to get one parameter: $courseID of type Int. By using the exclamation mark we’re specifying that this parameter must be provided.
Within getSingleCourse we’re executing the course query for this specific ID. We’re specifying that we’d like to retrieve the title, author, description, topic and url of that specific course.
Because the getSingleCourse query operation uses a dynamic parameter we need to supply the value of this parameter in the Query Variables input field as well:
{
    "courseID":1
}
Click on the execute button and the course with ID 1 should be returned.
Using Aliases & Fragments
You’re able to include multiple queries in one query operation. In the following example the getCourseWithFragments query operation contains two queries for single courses. To distinguish between the two queries we’re assigning aliases: course1 and course2.
query getCourseWithFragments($courseID1: Int!, $courseID2: Int!) {
      course1: course(id: $courseID1) {
             ...courseFields
      },
      course2: course(id: $courseID2) {
            ...courseFields
      }
}
fragment courseFields on Course {
  title
  author
  description
  topic
  url
}
As you can see, the query operation requires two parameters: courseID1 and courseID2. The first ID is used for the first query and the second ID is used for the second query.
Another feature which is used is a fragment. By using a fragment we’re able to avoid repeating the same set of fields in multiple queries. Instead we’re defining a reusable fragment named courseFields and specifying which fields are relevant for both queries in one place.
Before executing the query operation we need to assign values to the parameters:
{
    "courseID1":1,
    "courseID2":2
}
The result should contain both courses, each returned under its alias.
Creating And Using Mutations
So far we’ve only seen examples which fetch data from our GraphQL server. With GraphQL we’re also able to modify data by using mutations. To be able to use a mutation with our GraphQL server we first need to add code to our server implementation in server2.js:
// GraphQL schema
var schema = buildSchema(`
    type Query {
        course(id: Int!): Course
        courses(topic: String): [Course]
    },
    type Mutation {
        updateCourseTopic(id: Int!, topic: String!): Course
    }
    type Course {
        id: Int
        title: String
        author: String
        description: String
        topic: String
        url: String
    }
`);
Here you can see that the schema now contains a Mutation type as well. The mutation which is defined is named updateCourseTopic and takes two mandatory parameters: id and topic. The return type of that mutation is Course.
Using that mutation, it should be possible to change the topic of a specific course. In the same way as we did before for queries, we’re now assigning a function to the mutation in the root resolver. The function is implemented with the corresponding update logic:
var updateCourseTopic = function({id, topic}) {
    coursesData.map(course => {
        if (course.id === id) {
            course.topic = topic;
            return course;
        }
    });
    return coursesData.filter(course => course.id === id)[0];
}
var root = {
    course: getCourse,
    courses: getCourses,
    updateCourseTopic: updateCourseTopic
};
Now the server is able to handle mutations as well, so let’s try it out in the GraphiQL browser interface again.
A mutation operation is defined by using the mutation keyword followed by the name of the mutation operation. In the following example the updateCourseTopic mutation is included in the operation and again we’re making use of the courseFields fragment.
mutation updateCourseTopic($id: Int!, $topic: String!) {
  updateCourseTopic(id: $id, topic: $topic) {
    ... courseFields
  }
}
The mutation operation is using two dynamic variables so we need to assign the values in the query variables input field as follows:
{
  "id": 1,
  "topic": "JavaScript"
}
By executing this mutation we’re changing the value of the topic property for the course data set with ID 1 from Node.js to JavaScript. As a result we’re getting back the changed course.
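Outside of GraphiQL the mutation can be sent in the same way over HTTP, for example with curl (a sketch, assuming the server2.js process from above is running on port 4000). The variables are passed in the variables field of the JSON body, and the fragment is replaced by an explicit field selection:
$ curl -X POST http://localhost:4000/graphql \
    -H "Content-Type: application/json" \
    -d '{"query": "mutation updateCourseTopic($id: Int!, $topic: String!) { updateCourseTopic(id: $id, topic: $topic) { id title topic } }", "variables": {"id": 1, "topic": "JavaScript"}}'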
Conclusion
GraphQL provides a complete and understandable description of the data in your API and gives clients the power to ask for exactly what they need and nothing more.
In this tutorial you’ve learned how to implement your own GraphQL server with Node.js and Express. By using the Express middleware express-graphql, setting up a GraphQL server is really easy and requires only a few lines of code.


Saturday, September 15, 2018

Apache Kafka

Introduction to Kafka using NodeJs


This is a small article intended for Node.js developers who intend to start implementing a distributed messaging system using Kafka.

I am planning to write a series of articles demonstrating the usage of Kafka and Storm. This article is the first in the series. So let's begin.

1.1 What is Kafka ?

Kafka is a distributed messaging system providing fast, highly scalable and redundant messaging through a pub-sub model. Kafka’s distributed design gives it several advantages. First, Kafka allows a large number of permanent or ad-hoc consumers. Second, Kafka is highly available and resilient to node failures and supports automatic recovery. In real world data systems, these characteristics make Kafka an ideal fit for communication and integration between components of large scale data systems.

The Kafka Documentation has done an excellent job in explaining the entire architecture.

Before moving ahead, I would suggest the reader go through the following link. It is very important to understand the architecture.

https://kafka.apache.org/intro

1.2 Installing & Running Zookeeper and Kafka

Kafka can be downloaded from the following link. I am using the current stable release i.e. 0.10.1.1.

https://kafka.apache.org/downloads

Download the tar. Un-tar it and then follow the steps below:

Kafka uses ZooKeeper so you need to first start a ZooKeeper server if you don't already have one. Run the following command to start ZooKeeper:

$ bin/zookeeper-server-start.sh config/zookeeper.properties

Now to start kafka run the following command:

$ bin/kafka-server-start.sh config/server.properties

1.3 Creating Kafka Topic and playing with it

Let's create one topic and play with it. Below is the command to create a topic

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Posts

Once you create the topic, you can see the available topics with below command:

$ bin/kafka-topics.sh --list --zookeeper localhost:2181

For testing kafka, we can use the kafka-console-producer to send a message

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Posts

We can consume all the messages of the same topic by creating a consumer as below:

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic Posts --from-beginning
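You can also inspect the partition count, replication factor and leader assignment of the topic with the describe option:

$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic Posts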


1.4 Integrating Kafka with NodeJS

Let's create an API in NodeJS which will act as a producer to Kafka. We will then create a consumer in NodeJS which will consume the topic we created above.

We will be using the kafka-node and express modules for our producer.

var express = require('express');
var kafka = require('kafka-node');
var app = express();

Let's add the code to handle JSON in our api.

var bodyParser = require('body-parser')
app.use( bodyParser.json() );       // to support JSON-encoded bodies
app.use(bodyParser.urlencoded({     // to support URL-encoded bodies
  extended: true
}));

Now, in order to create a Kafka producer with a non-keyed partition, you can simply add the following code:

var Producer = kafka.Producer,
    client = new kafka.Client(),
    producer = new Producer(client);
Now let's add some event handlers for our producer. These will help us know the state of the producer.

producer.on('ready', function () {
    console.log('Producer is ready');
});

producer.on('error', function (err) {
    console.log('Producer is in error state');
    console.log(err);
})
Now, before producing a message to a Kafka topic, let us first create a simple route and test our API. Add the below code:

app.get('/',function(req,res){
    res.json({greeting:'Kafka Producer'})
});

app.listen(5001,function(){
    console.log('Kafka producer running at 5001')
});
So now the entire code looks like below:

var express = require('express');
var kafka = require('kafka-node');
var app = express();

var bodyParser = require('body-parser')
app.use( bodyParser.json() );       // to support JSON-encoded bodies
app.use(bodyParser.urlencoded({     // to support URL-encoded bodies
  extended: true
}));

var Producer = kafka.Producer,
    client = new kafka.Client(),
    producer = new Producer(client);

producer.on('ready', function () {
    console.log('Producer is ready');
});

producer.on('error', function (err) {
    console.log('Producer is in error state');
    console.log(err);
})


app.get('/',function(req,res){
    res.json({greeting:'Kafka Producer'})
});

app.listen(5001,function(){
    console.log('Kafka producer running at 5001')
})
So let's run the code and test our API in Postman.
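If you don't have Postman handy, a quick curl request against the root route works just as well (the server listens on port 5001 as configured above):

$ curl http://localhost:5001/

The response should be the JSON greeting returned by the route, i.e. {"greeting":"Kafka Producer"}.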






Now let's create a route which can post a message to the topic.

For the Node.js client, the kafka-node producer has a send() method which takes two arguments, the first being "payloads", an array of ProduceRequest objects. A ProduceRequest is a JSON object like:

{
   topic: 'topicName',
   messages: ['message body'], // multi messages should be a array, single message can be just a string or a KeyedMessage instance
   key: 'theKey', // only needed when using keyed partitioner (optional)
   partition: 0, // default 0 (optional)
   attributes: 2 // default: 0 used for compression (optional)
}
Add the below code to get the topic and the message to be sent.

app.post('/sendMsg',function(req,res){
    var sentMessage = JSON.stringify(req.body.message);
    payloads = [
        { topic: req.body.topic, messages:sentMessage , partition: 0 }
    ];
    producer.send(payloads, function (err, data) {
            res.json(data);
    });
   
})
Now let's run the code and hit our API with a payload. Once the producer pushes the message to the topic, we can see the message consumed by the console consumer we created earlier.
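A request shaped the way the /sendMsg route expects (topic and message fields in the JSON body) might look like this:

$ curl -X POST http://localhost:5001/sendMsg \
    -H "Content-Type: application/json" \
    -d '{"topic": "Posts", "message": "Hello from the Node.js producer"}'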

Now let's create a simple consumer for this in NodeJS.

In NodeJS, Kafka consumers can be created in multiple ways. The following is the simplest:

Consumer(client, payloads, options)
It takes 3 arguments, as above. "client" is the one that keeps a connection with the Kafka server. "payloads" is an array of FetchRequest objects; a FetchRequest is a JSON object like:

{
   topic: 'topicName',
   offset: 0, //default 0
}
All the possible options for the consumer are as below:

{
    groupId: 'kafka-node-group',//consumer group id, default `kafka-node-group`
    // Auto commit config
    autoCommit: true,
    autoCommitIntervalMs: 5000,
    // The max wait time is the maximum amount of time in milliseconds to block waiting if insufficient data is available at the time the request is issued, default 100ms
    fetchMaxWaitMs: 100,
    // This is the minimum number of bytes of messages that must be available to give a response, default 1 byte
    fetchMinBytes: 1,
    // The maximum bytes to include in the message set for this partition. This helps bound the size of the response.
    fetchMaxBytes: 1024 * 1024,
    // If set true, consumer will fetch message from the given offset in the payloads
    fromOffset: false,
    // If set to 'buffer', values will be returned as raw buffer objects.
    encoding: 'utf8'
}
So let's add the code below to create a simple consumer.

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client(),
    consumer = new Consumer(client,
        [{ topic: 'Posts', offset: 0}],
        {
            autoCommit: false
        }
    );
Let us add some simple event handlers, one of which notifies us when a message is consumed. For the simplicity of this article, let us just do a console.log:

consumer.on('message', function (message) {
    console.log(message);
});

consumer.on('error', function (err) {
    console.log('Error:',err);
})

consumer.on('offsetOutOfRange', function (err) {
    console.log('offsetOutOfRange:',err);
})
The entire code of the consumer looks like below:

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client(),
    consumer = new Consumer(client,
        [{ topic: 'Posts', offset: 0}],
        {
            autoCommit: false
        }
    );

consumer.on('message', function (message) {
    console.log(message);
});

consumer.on('error', function (err) {
    console.log('Error:',err);
})

consumer.on('offsetOutOfRange', function (err) {
    console.log('offsetOutOfRange:',err);
})
Before testing this consumer, let us first kill the shell consumer. Then hit our producer API again; the message should now be logged to the console by the Node.js consumer.
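Assuming the consumer code is saved in a file called consumer.js (the file name is not specified in this article, so it is just an assumption), the test sequence could look like this:

# start the Node.js consumer
$ node consumer.js

# in another terminal, publish a message through the producer API
$ curl -X POST http://localhost:5001/sendMsg \
    -H "Content-Type: application/json" \
    -d '{"topic": "Posts", "message": "Testing the Node.js consumer"}'

The message should then be printed by the consumer's message handler.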


This is the end of this article. In future articles I am planning to showcase more complicated usages of Kafka.

Hope this article helps!