Config Generator In Hyperledger Fabric

Every ledger starts with transactions kept inside blocks, but what is the first block? Well, the answer is the genesis block. Now, another question: how do we generate this genesis block? For this we can use the configtxgen tool to generate the initial, or genesis, block.
The tool is primarily focused on generating the genesis block for bootstrapping the orderer, but it is intended to be enhanced in the future for generating new channel configurations as well as reconfiguring existing channels.

This tool takes its parameters from the configtx.yaml file. You can find a sample configtx.yaml under the directory fabric/sampleconfig/configtx.yaml on GitHub.

Let's explore the configtx.yaml file. This file mainly contains the following sections:-

  • The Profiles section.
  • The Organizations section.
  • The default sections.

The Profiles Section :- Profiles make a good starting point for constructing a real deployment profile. Profiles may explicitly declare all configuration, but they usually inherit configuration from the default sections.

Sample Profile Configuration:-
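
A minimal sketch, modeled on the sample configtx.yaml shipped with Fabric (the profile, anchor, and organization names here are illustrative only):

Profiles:
    SampleSingleMSPSolo:
        Orderer:
            <<: *OrdererDefaults        # inherit values from the Orderer defaults section
            Organizations:
                - *SampleOrg            # reference to an organization anchor
        Application:
            <<: *ApplicationDefaults    # inherit values from the Application defaults section
            Organizations:
                - *SampleOrg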

The Organizations Section :- Each element in this section includes a single reference to its MSP definition and should be tagged with an anchor label such as &orgName, which allows the definition to be referenced in the Profiles section.

Sample Organizations Configuration:-
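
A minimal sketch, again modeled on the Fabric sample configuration (the organization name, MSP ID, MSP directory, and anchor peer address are illustrative):

Organizations:
    # The &SampleOrg anchor lets the Profiles section reference this definition.
    - &SampleOrg
        Name: SampleOrg
        ID: DEFAULT
        MSPDir: msp                     # path to the organization's MSP definition
        AnchorPeers:
            - Host: 127.0.0.1
              Port: 7051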

The Default Sections :- There are default sections for the Orderer and Application configuration. These include attributes such as BatchTimeout and are generally used as the base values inherited by the profiles.

Sample Default Configuration:-
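
A minimal sketch of the defaults, assuming a solo orderer (the address and batch values below are illustrative):

Orderer: &OrdererDefaults
    OrdererType: solo                   # ordering service implementation
    Addresses:
        - 127.0.0.1:7050
    BatchTimeout: 2s                    # time to wait before cutting a batch
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Organizations:

Application: &ApplicationDefaults
    Organizations: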

Generating the genesis block for the orderer :- We can use the configtxgen tool to generate the genesis block for the orderer.

configtxgen -profile <profile_name> -outputBlock orderer_genesisblock.pb

An orderer_genesisblock.pb file is generated in the current directory. This genesis block is used to bootstrap the ordering system channel, which the orderers use to authorize and orchestrate the creation of other channels. By default, the channel ID encoded into the genesis block by configtxgen will be testchainid. It is recommended that you modify this identifier to something that will be globally unique.

Creating the Channel :- For creating the channel we need the following things:-

  • Profile Name
  • Channel Name
  • Tx FileName

The configtxgen tool is also needed for creating the channel. We can use the following command to generate the channel creation transaction.

configtxgen -profile <profile_name> -channelID <channel_name> -outputCreateChannelTx <tx_filename>

This will output a marshaled Envelope message, which may be sent to the orderer's broadcast API to create the channel.
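
For example, assuming a channel named mychannel and an orderer listening at orderer.example.com:7050 (both names are illustrative), the generated transaction can be submitted with the peer CLI:

peer channel create -o orderer.example.com:7050 -c mychannel -f <tx_filename>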

We can also review the configuration in the following ways:-

  • Inspecting the Block
  • Inspecting the Channel

Inspecting the Block:-

An inspect flag, -inspectBlock, is available with the configtxgen tool for inspecting a block.

configtxgen -inspectBlock <Block_Name>

The output will be JSON that contains all the relevant information required for the inspection of the block.

Inspecting the Channel:-

An inspect flag, -inspectChannelCreateTx, is available with the configtxgen tool for inspecting a channel creation transaction.

configtxgen -inspectChannelCreateTx <tx_filename>

The output will be JSON that contains the information about the channel creation transaction.

You may even wish to combine the inspection with generation. For example:-

configtxgen -channelID foo -outputBlock foo_genesisblock.pb -inspectBlock foo_genesisblock.pb

That's all for the Configuration Transaction Generator. Stay tuned & happy coding 🙂

References:- Hyperledger Official Documentation


Crypto Generator in Hyperledger Fabric

Security is one of the major aspects of any network. Each node must have an identity, and on the basis of this identity the corresponding accesses are granted.

The same approach is followed by the Fabric network as well. Fabric CA generates the identity or artifact files for each node that can be part of the cluster, but for generating these artifact files we also need to specify some properties. These properties can be specified in the crypto-config.yaml file.

Crypto Configuration file:-
The crypto-config file contains the following information:-

OrdererOrgs – Definition of organizations managing orderer nodes.
PeerOrgs – Definition of organizations managing peer nodes.

OrdererOrgs:-
OrdererOrgs contains the following information about the orderer nodes in the cluster.

Name:- Name of the orderer.
Domain:- Domain URL for the orderer.
Hostname:- Host name for the orderer. This comes under the Specs section.

Sample Orderer Configuration:-


OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: example.com
    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer

PeerOrgs:-
PeerOrgs contains the following information about the peer nodes in the cluster.
Name:- Name of the organization.
Domain:- Domain URL for the organization.
Hostname:- Host name for the peer. This comes under the Specs section and is mandatory.
CommonName:- Used to override the common name.
Template Count:- Number of peer nodes for an organization.
Users Count:- Number of users for an organization.

Sample Peer Configuration:-


- Name: Org2
  Domain: org2.example.com
  Template:
    Count: 2
  Users:
    Count: 1

Note:- By default the peer name is in the “{{.Hostname}}.{{.Domain}}” format. If we don’t want to specify the count for users, we can set the value to zero; in that case the Fabric CA server will dynamically generate the artifacts and other necessary files.

After creating the crypto-config.yaml file as per the requirement, we can generate the artifacts and other necessary files for creating and maintaining the cluster.
Generating the crypto artifact files :-

We will use the cryptogen tool to generate the artifacts. Let's use the tool placed inside the bin directory of fabric-samples and feed the crypto-config.yaml to it.

../bin/cryptogen generate --config=./crypto-config.yaml

The output of this command is the names of all the organizations. You will notice that a crypto-config directory is created that contains all the required artifacts. This directory mainly contains two subdirectories, i.e. ordererOrganizations and peerOrganizations, which contain the artifacts for the orderer and peer nodes respectively.
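
As an illustration, for the Org2 definition above the layout looks roughly like this (the exact directory names depend on your crypto-config.yaml):

crypto-config/
  ordererOrganizations/
    example.com/
      ca/  msp/  orderers/  tlsca/  users/
  peerOrganizations/
    org2.example.com/
      ca/  msp/  peers/  tlsca/  users/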

For more information you can refer to Crypto Generator Documentation. In the next blog we will discuss Configuration Transaction Generator.

Till Then Stay Tuned!! 🙂

 

 

Hyperledger Fabric Certificate Authority (CA)

Deep Chains

Every operation in Hyperledger must be signed cryptographically with certificates. You can generate certificates yourself using OpenSSL or by using a third party. Before moving further into the details of the CA, let's first explore Hyperledger Fabric a little. 😉

Hyperledger Fabric

Hyperledger, founded in 2015, is an umbrella for open source projects, some of which are blockchain distributed ledger frameworks such as Fabric, Sawtooth and Iroha. Hyperledger Fabric is a permissioned blockchain, which means that parties that join the network are authenticated to participate on the network. It reduces security risks and displays records only to the parties involved. It provides:

  • Data Privacy
  • Information Sharing
  • Immutability

That was a concise description of Hyperledger Fabric. Now, let's explore the importance of the Hyperledger Fabric CA.

Fabric Certificate Authority (CA)

Fabric CA is a tool through which you can generate certificates. Let's say you have 10 users; then 10 certificates get generated…

View original post 515 more words

Assimilation of Spark Streaming With Kafka

Knoldus

As we know, Spark is used at a wide range of organizations to process large datasets. It seems like Spark is becoming mainstream. In this blog we will talk about the integration of Kafka with Spark Streaming. So, let's get started.

How Kafka can be integrated with Spark?

Kafka provides a messaging and integration platform for Spark Streaming. Kafka acts as the central hub for real-time streams of data, which are processed using complex algorithms in Spark Streaming. Once the data is processed, Spark Streaming can be used to publish the results into yet another Kafka topic.

Let’s see how to configure Spark Streaming to receive data from Kafka by creating an SBT project first and adding the following dependencies in build.sbt.

val sparkCore = "org.apache.spark" % "spark-core_2.11" % "2.2.0"
val sparkSqlKafka = "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.2.0"
val sparkSql = "org.apache.spark" % "spark-sql_2.11" % "2.2.0"

libraryDependencies ++= Seq(sparkCore, sparkSql, sparkSqlKafka)
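
With these dependencies in place, a minimal sketch of the streaming read could look like the following, assuming a local Kafka broker at localhost:9092 and a topic named test (both are assumptions for illustration):

import org.apache.spark.sql.SparkSession

object KafkaStreamExample extends App {
  // Build a local Spark session (master and app name are illustrative).
  val spark = SparkSession.builder()
    .master("local[*]")
    .appName("kafka-spark-streaming")
    .getOrCreate()

  // Subscribe to a Kafka topic; broker address and topic name are assumptions.
  val kafkaStream = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "test")
    .load()

  // Kafka delivers key/value as binary, so cast them to strings for processing.
  val messages = kafkaStream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

  // Print the incoming messages to the console (for demonstration only).
  val query = messages.writeStream
    .format("console")
    .start()

  query.awaitTermination()
}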

View original post 171 more words

Getting Started With Phantom

Knoldus


Phantom is a reactive, type-safe Scala driver for Apache Cassandra/DataStax Enterprise. So, first let's explore what Apache Cassandra is with a basic introduction to it.

Apache Cassandra

Apache Cassandra is a free, open source data storage system that was created at Facebook in 2008. It is a highly scalable database designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. It is a type of NoSQL database which is schema-free. For more about Cassandra, refer to this blog: Getting Started With Cassandra.

Phantom-DSL

We wanted to integrate Cassandra into the Scala ecosystem; that's why we used Phantom-DSL, one of the modules of Outworkers. So, if you are planning on using Cassandra with Scala, Phantom is the weapon of choice because of:-

  • Ease of use and quality coding.
  • Reducing code and boilerplate by at least 90%.
  • Automated schema generation

View original post 330 more words

Introduction to Perceptrons: Neural Networks

What is Perceptron?

In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
Linear classifier means that the training data should be classified into the corresponding categories, i.e. if we are applying classification for 2 categories then all the training data must lie in these two categories.
Binary classifier means that there should be only 2 categories for classification.
Hence, the basic perceptron algorithm is used for binary classification, and all the training examples should lie in these categories. The basic unit of a neural network is called the perceptron.

Origin of Perceptron:-

The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the United States Office of Naval Research. The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the “Mark 1 perceptron“. This machine was designed for image recognition: it had an array of 400 photocells, randomly connected to the “neurons“. Weights were encoded in potentiometers, and weight updates during learning were performed by electric motors.


Components of a Perceptron:- Following are the major components of a perceptron:

    • Input:- All the features become the input for a perceptron. We denote the input of a perceptron by [x1, x2, x3, .., xn], where x represents a feature value and n represents the total number of features. We also have a special kind of input called the BIAS. In the image, we have described the value of the bias as w0.
    • Weights:- Weights are the values that are computed over the course of training the model. We start the weights with some initial value, and these values get updated for each training error. We represent the weights of a perceptron by [w1, w2, w3, .., wn].
    • BIAS:- A bias neuron allows a classifier to shift the decision boundary left or right. In algebraic terms, the bias neuron allows a classifier to translate its decision boundary. To translate is to “move every point a constant distance in a specified direction”. The BIAS helps to train the model faster and with better quality.
    • Weighted Summation:- The weighted summation is the sum of the values we get after multiplying each weight [wi] by its associated feature value [xi]. We represent the weighted summation by ∑wixi for all i -> [1 to n].
    • Step/Activation Function:- The role of activation functions is to make neural networks non-linear. The basic perceptron applies a simple step function to the weighted summation to produce a binary output, which is sufficient for linearly separable classification.
    • Output:- The weighted summation is passed to the step/activation function, and whatever value we get after this computation is our predicted output.

Inside The Perceptron:-

[Diagram: the perceptron]

Description:-

  • Firstly, the features of an example are given as input to the perceptron.
  • These input features get multiplied by the corresponding weights (which start with some initial value).
  • The summation of the values obtained by multiplying each feature with its corresponding weight is computed.
  • The value of the summation is added to the bias. Then,
  • the step/activation function is applied to the new value, as shown in the sketch below.
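
A minimal sketch of this forward pass in Scala, with an assumed step function that outputs 1 when the biased sum is non-negative (the feature values, weights and bias below are purely illustrative):

object PerceptronExample extends App {
  // Step/activation function: 1 if the input is non-negative, otherwise 0.
  def step(z: Double): Int = if (z >= 0) 1 else 0

  // Forward pass: weighted summation of the features, plus the bias, passed through the step function.
  def predict(features: Array[Double], weights: Array[Double], bias: Double): Int = {
    val weightedSum = features.zip(weights).map { case (x, w) => x * w }.sum
    step(weightedSum + bias)
  }

  // Illustrative values only: two features, two weights and a bias.
  val features = Array(1.0, 0.5)
  val weights  = Array(0.4, -0.6)
  val bias     = 0.1

  println(s"Predicted class: ${predict(features, weights, bias)}")
}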

References:- Perceptron: The most basic form of a Neural Network

 

Decision Tree using Apache Spark

What is Decision Tree?

A decision tree is a powerful method for classification and prediction and for facilitating decision making in sequential decision problems. A Decision tree is made up of two components:-
  • A Decision
  • Outcome

and a decision tree includes three types of nodes:-

  • Root node: The top node of the tree comprising all the data.
  • Splitting node: A node that assigns data to a subgroup.
  • Terminal node:  Final decision (outcome).

To reach an outcome, the process starts at the root node; based on the decision made at the root node, the next node, i.e. a splitting node, is selected, and based on the decision made at that splitting node another child splitting node is selected. This process goes on until we reach a terminal node, and the value of the terminal node is our outcome.

Decision Tree in Apache Spark

It might sound strange or geeky that there is no standalone implementation of the decision tree in Apache Spark. Well, technically that is true, because in Apache Spark you can find an implementation of the Random Forest algorithm in which the number of trees can be specified by the user. So, under the hood, Apache Spark calls Random Forest with one tree.

In Apache Spark, the decision tree is a greedy algorithm that performs a recursive binary partitioning of the feature space. The tree predicts the same label for each bottom-most (leaf) partition. Each partition is chosen greedily by selecting the best split from a set of possible splits, in order to maximize the information gain at a tree node.

The node impurity is a measure of the homogeneity of the labels at the node. The current implementation provides two impurity measures for classification (Gini impurity and entropy).

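For reference, the two impurity measures and the information gain used to score a split can be written as follows (these follow the standard definitions used in the Spark MLlib documentation):

Gini impurity: $\sum_{i=1}^{C} f_i (1 - f_i)$
Entropy: $-\sum_{i=1}^{C} f_i \log(f_i)$

where $f_i$ is the frequency of label $i$ at the node and $C$ is the number of unique labels. The information gain of a split is the difference between the parent node impurity and the weighted sum of the two child node impurities:

$IG(D, s) = Impurity(D) - \frac{N_{left}}{N} Impurity(D_{left}) - \frac{N_{right}}{N} Impurity(D_{right})$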

Stopping rule

The recursive tree construction is stopped at a node when one of the following conditions is met:

  1. The node depth is equal to the maxDepth training parameter.
  2. No split candidate leads to an information gain greater than minInfoGain.
  3. No split candidate produces child nodes which each have at least minInstancesPerNode training instances.

Useful Parameters

  • algo:- It can be either Classification or Regression.
  • numClasses :- Number of classification classes.
  • maxDepth :- Maximum depth of a tree in terms of nodes.
  • minInstancesPerNode :- For a node to be split further, each of its children must receive at least this number of training instances.
  • minInfoGain :- For a node to be split further, the split must improve the information gain by at least this much.
  • maxBins :- Number of bins used when discretizing continuous features.

Preparing training data for Decision Tree

You cannot directly feed arbitrary data to the decision tree; it demands a special format. You can use the HashingTF technique to convert the training data into labelled data that the decision tree can understand. This process is also known as standardization of the data. A minimal sketch of this step is shown below.
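
The sketch below shows one way this conversion might look, assuming an existing SparkContext `sc` and two hypothetical input files, spam.txt and ham.txt, where each line is one message:

import org.apache.spark.mllib.feature.HashingTF
import org.apache.spark.mllib.regression.LabeledPoint

// Hash each message into a fixed-size feature vector (1000 features is illustrative).
val tf = new HashingTF(numFeatures = 1000)

// Label spam messages as 1.0 and ham messages as 0.0 (file names are assumptions).
val spam = sc.textFile("spam.txt")
  .map(line => LabeledPoint(1.0, tf.transform(line.split(" "))))
val ham = sc.textFile("ham.txt")
  .map(line => LabeledPoint(0.0, tf.transform(line.split(" "))))

// `data` is the standardized input used in the training snippet below.
val data = spam.union(ham)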

Feeding and obtaining Result

Once the data has been standardized you can feed it to the Decision Tree algorithm for classification, but before that you need to split the data for training and testing purposes, i.e. to test the accuracy you need to hold back some part of the data for testing. You can feed the data like this.

import org.apache.spark.mllib.tree.DecisionTree

val splits = data.randomSplit(Array(0.7, 0.3))
val (trainingData, testData) = (splits(0), splits(1))

// Train a DecisionTree model.
//  Empty categoricalFeaturesInfo indicates all features are continuous.
val numClasses = 2
val categoricalFeaturesInfo = Map[Int, Int]()
val impurity = "gini"
val maxDepth = 5
val maxBins = 32

val model = DecisionTree.trainClassifier(trainingData, numClasses, categoricalFeaturesInfo,
  impurity, maxDepth, maxBins)

Here, data is my standardized input data, which I split in a 7:3 ratio for training and testing purposes respectively. We are using `gini` impurity with a maximum depth of `5`.

Once the model is generated you can try predicting the classification for other data, but before that we need to validate the classification accuracy of the recently generated model. You can validate or compute the accuracy by computing the `Test Error`.

// Evaluate model on test instances and compute test error
val labelAndPreds = testData.map { point =>
  val prediction = model.predict(point.features)
  (point.label, prediction)
}
val testErr = labelAndPreds.filter(r => r._1 != r._2).count().toDouble / testData.count()
println("Test Error = " + testErr)

The lower the value of the Test Error, the better the prepared model is. You can take a look at a running example here.

References:- Apache Spark Documentation