Config Generator In Hyperledger Fabric

Every ledger starts with transactions kept inside blocks, but what is the first block? Well, the answer is the genesis block. Now, another question: how do we generate this genesis block? For this we can use the configtxgen tool to generate the initial, or genesis, block.
The tool is primarily focused on generating the genesis block for bootstrapping the orderer, but it is intended to be enhanced in the future for generating new channel configurations as well as reconfiguring existing channels.

This tool takes its parameters in the form of a configtx.yaml file. You can find the sample configtx.yaml file under the directory fabric/sampleconfig/configtx.yaml on GitHub.

Let's explore the configtx.yaml file. This file mainly contains the following sections:-

  • The Profiles section.
  • The Organizations section.
  • The default sections.

The Profiles Section:- Profiles make a good starting point for constructing a real deployment profile. Profiles may explicitly declare all configuration, but usually inherit configuration from the default sections.

Sample Profile Configuration:-
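A minimal sketch following the layout of the sampleconfig file (the names SampleSingleMSPSolo, SampleOrg and SampleConsortium are just the sample's defaults and may differ in your deployment):

Profiles:
  SampleSingleMSPSolo:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *SampleOrg
    Consortiums:
      SampleConsortium:
        Organizations:
          - *SampleOrg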

The Organizations Section :- This section includes a single reference to the MSP definition. Each element in the Organizations section should be tagged with an anchor label such as &orgName, which allows the definition to be referenced in the Profiles section.

Sample Organizations Configuration:-
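A minimal sketch of an organization entry, again following the sampleconfig layout (the &SampleOrg anchor is what the Profiles section references; the name, ID, MSP path and anchor peer address are illustrative):

Organizations:
  - &SampleOrg
    Name: SampleOrg
    ID: SampleOrg
    MSPDir: msp
    AnchorPeers:
      - Host: 127.0.0.1
        Port: 7051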

The Default Sections :- There are default sections for the Orderer and Application configuration. These include attributes like BatchTimeout and are generally used as the base values inherited by the profiles.

Sample Default Configuration:-
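A sketch of the default sections with illustrative values (the &OrdererDefaults anchor is what a profile inherits via <<:, as in the profile sample above):

Orderer: &OrdererDefaults
  OrdererType: solo
  Addresses:
    - 127.0.0.1:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 10 MB
    PreferredMaxBytes: 512 KB

Application: &ApplicationDefaults
  Organizations: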

Generating the Genesis Block for the Orderer:- We can use the configtxgen tool to generate the genesis block for the orderer.

configtxgen -profile <profile_name> -outputBlock orderer_genesisblock.pb

An orderer_genesisblock.pb file is generated in the current directory. This genesis block is used to bootstrap the ordering system channel, which the orderers use to authorize and orchestrate creation of other channels. By default, the channel ID encoded into the genesis block by configtxgen will be testchainid. It is recommended that you modify this identifier to something which will be globally unique.
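For example, a custom channel ID can be encoded with the -channelID flag (assuming a profile named SampleSingleMSPSolo, as in the sample above):

configtxgen -profile SampleSingleMSPSolo -channelID my-sys-channel -outputBlock orderer_genesisblock.pb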

Creating the Channel:- For creating a channel we need the following things:-

  • Profile Name
  • Channel Name
  • Tx FileName

The configtxgen tool is also used for creating a channel. We can use the following command to generate the channel creation transaction.

configtxgen -profile <profile_name> -channelID <channel_name> -outputCreateChannelTx <tx_filename>

This will output a marshaled Envelope message, which may be sent to the ordering service's broadcast API to create the channel.

We can also review the generated configuration in the following ways:-

  • Inspecting the Block
  • Inspecting the Channel

Inspecting the Block:-

An inspect flag, -inspectBlock, is available for inspecting a block; it is used with the configtxgen tool.

configtxgen -inspectBlock <Block_Name>

The output will be JSON containing all the relevant information required for inspection of the block.

Inspecting the Channel:-

An inspect flag is likewise available for the channel creation transaction: -inspectChannelCreateTx, used with the configtxgen tool on the <tx_filename> generated above.

configtxgen -inspectChannelCreateTx <tx_filename>

The output will be JSON containing the information about the channel creation transaction.

You may even wish to combine the inspection with generation. For example:-

configtxgen -channelID foo -outputBlock foo_genesisblock.pb -inspectBlock foo_genesisblock.pb

That's all for the Configuration Transaction Generator. Stay tuned & happy coding 🙂

References:- Hyperledger Fabric Official Documentation


Crypto Generator in Hyperledger Fabric

Security is one of the major aspects of any network. Each node must have an identity, and on the basis of this identity the corresponding access is granted.

The same approach is followed by the Fabric network as well. Identity artifact files are generated for each node that can be part of the cluster (by the cryptogen tool, or dynamically by the Fabric CA), but for generating these artifact files we also need to specify some properties. These properties can be specified in the crypto-config.yaml file.

Crypto Configuration file:-
The crypto-config file contains the following information:-

OrdererOrgs – Definition of organizations managing orderer nodes.
PeerOrgs – Definition of organizations managing peer nodes.

OrdererOrgs:-
OrdererOrgs contains the following information about the orderer nodes in the cluster.

Name:- Name of the orderer.
Domain:- Domain URL for the orderer.
Hostname:- Host name for the orderer. This comes under the Specs section.

Sample Orderer Configuration:-


OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: example.com
    # -------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # -------------------------------------------------------------------------
    Specs:
      - Hostname: orderer

PeerOrgs:-
PeerOrgs contains the following information about the peer nodes in the cluster.
Name:- Name of the organization.
Domain:- Domain URL for the organization.
Hostname:- Host name for the peer. This comes under the Specs section and is mandatory within each spec entry.
CommonName:- Used to override the default common name.
Template Count:- Number of peer nodes for an organization.
Users Count:- Number of users for an organization.

Sample Peer Configuration:-


PeerOrgs:
  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 2
    Users:
      Count: 1

Note:- By default the peer name follows the “{{.Hostname}}.{{.Domain}}” format. If we don’t want to specify the count for users, we can set the value to zero; in that case the Fabric CA server will dynamically generate the artifacts and other necessary files.

After creating the crypto-config.yaml file as per our requirements, we can generate the artifacts and other necessary files for creating and maintaining the cluster.
Generating the crypto artifact files:-

We will use the cryptogen tool to generate the artifacts. Let's use the tool placed inside the bin directory of fabric-samples and feed crypto-config.yaml to it.

../bin/cryptogen generate --config=./crypto-config.yaml

The output of this command is the names of all the organizations. You will notice that a crypto-config directory is created containing all the required artifacts. This directory mainly contains two subdirectories, i.e. ordererOrganizations and peerOrganizations, which hold the artifacts for the orderer and peer nodes respectively.
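Roughly, for the sample configuration above, the generated layout looks like this (a sketch; exact contents vary with your crypto-config.yaml):

crypto-config/
├── ordererOrganizations/
│   └── example.com/
│       ├── ca/
│       ├── msp/
│       ├── orderers/
│       ├── tlsca/
│       └── users/
└── peerOrganizations/
    └── org2.example.com/
        ├── ca/
        ├── msp/
        ├── peers/
        ├── tlsca/
        └── users/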

For more information you can refer to Crypto Generator Documentation. In the next blog we will discuss Configuration Transaction Generator.

Till Then Stay Tuned!! 🙂

 

 

Hyperledger Fabric Certificate Authority (CA)

Deep Chains

Every operation in Hyperledger must be signed cryptographically with certificates. You can generate certificates yourself using OpenSSL or by using a third party. Before moving further into the details of the CA, let's first explore Hyperledger Fabric a little. 😉

Hyperledger Fabric

Hyperledger, founded in 2015, is an umbrella for open source projects, some of which are blockchain distributed ledger frameworks such as Fabric, Sawtooth and Iroha. Hyperledger Fabric is a permissioned blockchain, meaning that parties that join the network are authenticated to participate in it. This reduces security risks and exposes records only to the parties involved. It provides:

  • Data Privacy
  • Information Sharing
  • Immutability

That was a concise description of Hyperledger Fabric. Now, let's explore the importance of the Hyperledger Fabric CA.

Fabric Certificate Authority (CA)

Fabric CA is a tool through which you can generate certificates. Let's say you have 10 users; then 10 certificates get generated…


Assimilation of Spark Streaming With Kafka

Knoldus

As we know, Spark is used at a wide range of organizations to process large datasets, and it seems like Spark is becoming mainstream. In this blog we will talk about the integration of Kafka with Spark Streaming. So, let's get started.

How can Kafka be integrated with Spark?

Kafka provides a messaging and integration platform for Spark Streaming. Kafka acts as the central hub for real-time streams of data, which are processed using complex algorithms in Spark Streaming. Once the data is processed, Spark Streaming can be used to publish results into yet another Kafka topic.

Let’s see how to configure Spark Streaming to receive data from Kafka by first creating an SBT project and adding the following dependencies to build.sbt.

val sparkCore = "org.apache.spark" % "spark-core_2.11" % "2.2.0"
val sparkSqlKafka = "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.2.0"
val sparkSql = "org.apache.spark" % "spark-sql_2.11" % "2.2.0"

libraryDependencies ++= Seq(sparkCore, sparkSql, sparkSqlKafka)
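With these dependencies in place, a minimal sketch of reading a Kafka topic with Structured Streaming might look like the following (the broker address localhost:9092 and the topic name "events" are illustrative assumptions, not from the original post):

import org.apache.spark.sql.SparkSession

object KafkaStreamSketch extends App {
  val spark = SparkSession.builder()
    .appName("kafka-stream-sketch")
    .master("local[*]")
    .getOrCreate()

  // Subscribe to a Kafka topic as a streaming DataFrame
  // (assumes a broker at localhost:9092 and a topic named "events")
  val df = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()

  // Kafka records arrive as binary key/value pairs; cast them to strings
  val messages = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

  // Print the stream to the console for demonstration purposes
  val query = messages.writeStream
    .format("console")
    .start()

  query.awaitTermination()
}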


Scala Coding Style Guide:- In Short

We have all been using Scala for a very long time now, and sometimes we miss some guidelines for writing Scala code. This blog guides you through some common Scala coding styles. Let's get started.

  • Indentation:- Scala follows 2-space indentation instead of 4 spaces; well, I guess there will be no fight over tabs versus 4 spaces here.
    //WRONG                                //RIGHT              
    class Foo {                            class Foo {          
        def bar = ...                        def bar = ...      
    }                                      }                    
  • Line Wrapping:- There are times when a single expression reaches a length where it becomes unreadable to keep it confined to a single line. Scala style prefers that when the length of a line crosses 80 characters, you split it across multiple lines, i.e.
    val result = 1 + 2 + 3 + 4 + 5 + 6 +
      7 + 8 + 9 + 10 + 11 + 12 + 13 + 14 +
      15 + 16 + 17 + 18 + 19 + 20
  • Methods with Numerous Arguments:- If a function has long or complex parameter lists, follow these rules:
    1. Put the first parameter on the same line as the function name.
    2. Put the rest of the parameters each on a new line, aligned with the first parameter.
    3. If the function has multiple parameter lists, align the opening parenthesis with the previous one and align parameters as in rule 2, i.e.

    foo(                           val myLongFieldName =
      someVeryLongFieldName,         foo(
      andAnotherVeryLongFieldName,   someVeryLongFieldName,
      "this is a string",            "this is a string",
      3.1415)                        3.1415)
  • Naming Conventions:- Scala uses “camel case” naming. That is, each word is capitalized, except possibly the first word. Scala prefers not to use underscores in names, because the underscore has special meanings in Scala.

    1. Classes/Traits:- Classes should be named in upper camel case.
      class MyFairLady
    2. Objects:- Object names are like class names (upper camel case).
    3. Packages:- Similar to what Java offers i.e.
      // right! puts only coolness._ in scope
      package com.novell.coolness
      // right! puts both novell._ and coolness._ in scope
      package com.novell
      package coolness
    4. Methods:- Textual (alphabetic) names for methods should be in lower camel case. For getters, the method name should be the same as the field name, and for setters, the method name should be the field name followed by “_=”. If the value is of Boolean type, “is” can be prepended to the field name to create the method name, i.e.
      def myFairMethod = ...
      class Foo {
        def bar = ...
        def bar_=(bar: Bar) { ... }
        def isBaz = ...
      }
    5. Constants, Values, Variables and Methods:- Constant names should be in upper camel case, but for values, variables and methods lower camel case is followed, i.e.
      val myValue = ...
      val Pi = 3.14
      def myMethod = ...
      var myVariable = ...
    6. Parentheses:- The opening and closing parentheses should be unspaced and generally kept on the same lines as their content (Lisp-style):
      (this + is a very ++ long *
        expression)
    7. Curly Braces:- Opening curly braces ({) must be on the same line as the declaration they represent:
      def foo = {
        ...
      }
    8. Higher-Order Functions:- The preferred Scala style is to declare higher-order functions with multiple parameter lists, placing the function parameter in its own list at the end, which makes call sites read naturally, i.e.
      //Declaration
      def foldLeft[A, B](ls: List[A])(init: B)(f: (B, A) => B): B = ...
      //Calling
      foldLeft(List(1, 2, 3, 4))(0)(_ + _)
      

This blog is intended to summarize some basic Scala style guidelines which should be followed to write more readable code. We have tried to explain why a particular style is encouraged. In further blogs we will discuss more coding styles in depth; till then, happy coding.

References:- Scala Documentation Style Guide

Getting Started With Phantom

Knoldus


Phantom is a reactive, type-safe Scala driver for Apache Cassandra/DataStax Enterprise. So, let's first explore what Apache Cassandra is with a basic introduction to it.

Apache Cassandra

Apache Cassandra is a free, open source data storage system that was created at Facebook in 2008. It is a highly scalable database designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. It is a type of NoSQL database which is schema-free. For more about Cassandra, refer to the blog Getting Started With Cassandra.

Phantom-DSL

We wanted to integrate Cassandra into the Scala ecosystem; that's why we used Phantom-DSL, one of the modules of Outworkers. So, if you are planning on using Cassandra with Scala, Phantom is the weapon of choice because of:-

  • Ease of use and quality coding.
  • Reducing code and boilerplate by at least 90%.
  • Automated schema generation


Introduction to Perceptrons: Neural Networks

What is a Perceptron?

In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
A linear classifier means the training data is separated into the corresponding categories by a linear decision boundary; i.e., if we are classifying into 2 categories, then all the training data must lie in these two categories.
A binary classifier means there are only 2 categories for classification.
Hence, the basic perceptron algorithm is used for binary classification, and all the training examples should lie in these two categories. The perceptron is the basic unit of a neural network.

Origin of the Perceptron:-

The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the United States Office of Naval Research. The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the “Mark 1 perceptron“. This machine was designed for image recognition: it had an array of 400 photocells, randomly connected to the “neurons“. Weights were encoded in potentiometers, and weight updates during learning were performed by electric motors.

[Image: perceptron diagram]

Components of a Perceptron:- Following are the major components of a perceptron:

    • Input:- All the features become the inputs for a perceptron. We denote the inputs of a perceptron by [x1, x2, x3, …, xn], where xi represents a feature value and n represents the total number of features. We also have a special kind of input called the BIAS. In the image, we have denoted the bias weight as w0.
    • Weights:- Weights are the values that are computed over the course of training the model. We start the weights with some initial values, and these values get updated for each training error. We represent the weights of a perceptron by [w1, w2, w3, …, wn].
    • BIAS:- A bias neuron allows a classifier to shift the decision boundary left or right. In algebraic terms, the bias neuron allows a classifier to translate its decision boundary; to translate is to “move every point a constant distance in a specified direction”. The bias helps to train the model faster and with better quality.
    • Weighted Summation:- The weighted summation is the sum of the values obtained after multiplying each weight [wi] with its associated feature value [xi]. We represent the weighted summation by ∑ wixi for all i in [1, n].
    • Step/Activation Function:- The role of activation functions is to make neural networks non-linear. The basic perceptron uses a step function, which thresholds the weighted summation into one of the two classes.
    • Output:- The weighted summation is passed to the step/activation function, and whatever value we get after this computation is our predicted output.

Inside The Perceptron:-

[Image: inside the perceptron]

Description:-

  • Firstly, the features of an example are given as input to the perceptron.
  • These input features get multiplied by the corresponding weights (which start with initial values).
  • A summation is computed over the values obtained by multiplying each feature with its corresponding weight.
  • The value of the summation is added to the bias. Then,
  • the step/activation function is applied to the resulting value, giving the predicted output (see the sketch after this list).
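To make the flow concrete, here is a minimal sketch of a perceptron in Scala; the AND-gate training data, learning rate and epoch count are illustrative choices, not from the original post:

object PerceptronSketch extends App {

  // Step activation: thresholds the weighted sum into one of two classes
  def step(z: Double): Int = if (z >= 0) 1 else 0

  // Forward pass: weighted summation plus bias, passed through the step function
  def predict(weights: Vector[Double], bias: Double, xs: Vector[Double]): Int =
    step(weights.zip(xs).map { case (w, x) => w * x }.sum + bias)

  // One epoch of the perceptron learning rule: for each example,
  // nudge every weight by lr * (target - predicted) * xi
  def trainEpoch(weights: Vector[Double], bias: Double,
                 data: Seq[(Vector[Double], Int)], lr: Double = 0.1): (Vector[Double], Double) =
    data.foldLeft((weights, bias)) { case ((w, b), (xs, target)) =>
      val error = target - predict(w, b, xs)
      (w.zip(xs).map { case (wi, xi) => wi + lr * error * xi }, b + lr * error)
    }

  // Toy training set: the logical AND function (linearly separable)
  val data = Seq(
    (Vector(0.0, 0.0), 0),
    (Vector(0.0, 1.0), 0),
    (Vector(1.0, 0.0), 0),
    (Vector(1.0, 1.0), 1)
  )

  // Train for a fixed number of epochs starting from zero weights
  val (w, b) = (1 to 20).foldLeft((Vector(0.0, 0.0), 0.0)) {
    case ((w, b), _) => trainEpoch(w, b, data)
  }

  data.foreach { case (xs, t) =>
    println(s"input $xs -> predicted ${predict(w, b, xs)}, target $t")
  }
}

Because AND is linearly separable, the perceptron learning rule converges here, and all four inputs end up classified correctly.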

References:- Perceptron: The Most Basic Form of Neural Network