Industrial IoT based Machine Tool Condition Monitoring With GE Predix Time Series Ingestion And Data Streaming

6 Aug 2016 · CPOL
Industrial IoT time series data collection with GE Predix time series ingestion and data streaming

1. Overview

Tool wear monitoring refers to tracking the wear and tear a machine tool suffers in the course of operation. Every machine tool wears at a rate that depends on the type of tool, the nature of the job, and so on. With consistent wear and tear, the tool eventually becomes unusable: as the tool keeps wearing off, the precision of the job degrades, and the tool needs replacement once it can no longer produce jobs within a certain accepted accuracy. In industry, standard procedures are adopted to measure the wear of these tools, mostly based on industrial accelerometer sensors and high-precision multi-channel simultaneous data acquisition units.

Tool lifetime expectancy is defined as the overall predicted time for which the tool can be used. It is often estimated using Taylor's equation, which depends on cutting speed, depth of cut, feed rate, and the composition of the material. Many a time, however, the actual lifetime deviates from the calculated one due to variations in the production process, such as improper feed or a material composition different from the specified one. As the expected tool lifetime is an important metric for production and maintenance planning, many companies now adopt standard observations of the tool and try to accurately predict the Remaining Useful Life (RUL). RUL is defined as the estimated time from the current moment until the tool fails (becomes unusable in production, in simple terminology). RUL prediction is rarely as straightforward as this description suggests, and a great deal of research is going on in this direction to estimate RUL effectively.
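Taylor's tool life equation mentioned above can be sketched numerically. This is an illustrative sketch only: the constants n and C below are hypothetical values, and in practice they depend on the tool material, the workpiece, and the cutting conditions.

```python
# Sketch of Taylor's tool life equation, V * T**n = C, solved for T.
# The constants n and C are hypothetical illustration values; real
# ones come from tool/workpiece data sheets or experiments.

def tool_life_minutes(cutting_speed: float, n: float, C: float) -> float:
    """Solve V * T**n = C for the tool life T (in minutes)."""
    return (C / cutting_speed) ** (1.0 / n)

# A faster cut shortens the predicted tool life:
life_slow = tool_life_minutes(cutting_speed=60.0, n=0.25, C=350.0)
life_fast = tool_life_minutes(cutting_speed=120.0, n=0.25, C=350.0)
```

Because n is small for carbide tooling, even a modest increase in cutting speed reduces the predicted life dramatically, which is why deviations in the production process throw the calculated lifetime off so easily.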

One of the common approaches is machine learning. In a machine learning approach, known data is provided to a computation engine, based on which the engine classifies the state of given input data or runs a predictive analysis about the change of state. Such techniques are therefore dependent on the features extracted from the data.

As discussed earlier, one of the common metrics for measuring tool quality is the vibration it produces. The vibration data is accumulated as total acceleration in all three directions. Data that varies with time like this is known as a time series, so RUL is a problem of time series prediction, which can be attacked with various techniques including neural networks.

In order to create a machine learning framework for RUL, the tool must first be used until Run to Failure (RTF) under a constrained environment while the data and lifetime are logged and observed. This technique has conventionally been used to plan maintenance schedules effectively.

With improved computational and predictive engines, the time series data acquired during RTF can be fed to a machine learning technique, which can interpolate the current observation using neural networks, support vector machines, curve fitting, or other techniques to find the RUL.

With cloud-based artificial intelligence offered by Microsoft Azure, IBM Watson, and others, many companies are looking for high-precision RUL estimates built on these cloud service providers. This needs a robust framework for data acquisition, collection, analysis, and prediction. Machines are often referred to as Assets in industrial terminology. A company may have several assets functioning in different sections of production, and assets may come from different manufacturers with different specifications. Taylor's equation depends on constants that in turn depend on the asset type, manufacturer, specifications, and so on. A RUL system therefore needs the following fundamentals:

  • Asset management system
  • RTF data
  • Data acquisition and collection
  • Data filtering
  • Predictive analysis (that incorporates metrics from asset data)

Security and latency of communication become added challenges on top of the above stack. Data reliability is another issue the stack needs to address.

We started working towards RUL analysis a year back, after collaborating with a Bangalore-based company. Our role was to analyze the data acquired through RTF and find a method that could suitably predict RUL.

Needless to say, when we started, the entire project was offline. We gathered data and recorded it in text files, then analyzed it in Matlab and created threshold-based techniques applied over various transformed domains to estimate the RUL. Our entire experimentation was based on a single lathe machine and a single tool carrying out a constrained job. We realized that, with sufficient knowledge of the data, writing a RUL analysis for a single asset, single job, and single tool is not that difficult.

Now consider a large-scale company with hundreds or maybe even thousands of assets, deployed in different units spread over an entire geography and carrying out several complex jobs. Creating a universal RUL analysis would need RTF for every type of job on each of the assets, which is practically impossible within a production cycle.

The only way out is a cloud-based machine learning technique, trained with RTF data from different assets running planned, constrained jobs. If a real-time observation series is then given as input to this analysis, it should be able to generate an alert on RUL.

Various players in the industrial automation domain have their own platforms, perfected for particular types of industries over a period of time. Their services include their own middleware, UIs, and gateways, so migrating from one provider to another was never easy for companies: it needed an entire change of protocols, sensors, and UIs, and companies busy with production hardly had time for this. Traditionally, a handful of industrial automation players have dominated the industry for decades. Now that manufacturing and production processes face tremendous challenges, companies are forced to take a step back and review their automation and analytics processes. Cloud-based services have opened up the game widely.

 

GE's Predix environment has been one of the hottest buzzwords in industrial IoT (IIoT, as they say) these days. With GE's years of experience in asset management, maintenance planning, and asset monitoring, the industry is putting a huge bet on this PaaS ecosystem. Having jumped onto the IoT bandwagon quite early, and being about a year and a half into the IoT ecosystem ourselves, we thought this CodeProject contest might be the right opportunity to explore industrial IoT in greater detail. We were extremely excited to see a special "Industrial IoT" section. That was the right motivation at the right time, so we started our journey with industrial IoT.

 

Disclaimer: None of us are mechanical engineers, so many definitions in this article may not be pinpoint precise compared to what you would learn from mechanical engineering textbooks. We beg your pardon at the outset in case there is a technical fault with the definitions or the industrial processes we describe. Further, this article is more or less a beginner's manual for getting started with the GE Predix platform, understanding the principles of RUL, and creating a simulation environment for IIoT with a prototyping platform like Intel Edison; it is not a fully featured, end-to-end, complete stack of a universal RUL system. If you are an expert mechanical engineer with years of experience in production and maintenance planning, please leave your valuable advice, comments, and suggestions. If you are a novice who wants a getting-started guide for IIoT, use this article as your startup guide, but not without referring to some more mechanical engineering texts.

 

2. Article Structure

Our plan was definitely to build a full-stack RUL and machine tool analytics framework with GE Predix. But as we started, we realized that this is probably the only thing known to us that is more complex than a woman's nature (Moumita disagrees and feels that nothing beats her gender's complexity!).

So, unfortunately, we could not build the complete system; we had no idea about the complexity of the architecture when we began. We therefore limit this article to time series ingestion, which is one of the preliminary steps in cloud-based analysis of RUL or machine tool health and lifetime. The objective is to detail the complete stack, cover as much as possible, point out what we could not do, and request the community's help in connecting the couple of dots we failed to, so as to turn this solution into a complete process. We also cover our data collection process. This should help the developer community understand the data acquisition prerequisites in the industry and help model architectures to suit such a framework.

 

Here is how the article is organized:

Part A: IIoT with GE Predix

  • Real-time accelerometer data acquisition for tool lifetime expectancy analysis (non-IoT)
  • A discussion about transforming this data acquisition architecture to IoT
  • Getting started with GE Predix
  • Developing a microservice for sending data acquired at the machine to GE Predix in real time
  • A discussion about analyzing this data

 

Having understood how the data acquisition works, we need to focus on predictive analysis. However, this article does not cover the analytical part, as that requires another elaborate discussion.

3. Real-Time Data Acquisition Process in the Industry

This section presents detailed information about the test facilities and the instruments for vibration data acquisition. A proper data acquisition process can greatly improve the sensitivity of failure detection. The images and descriptions are from a small-scale industry in Bangalore, India, with whom we collaborated to carry out the experiments (name omitted at the request of the management). We use a lathe and a turning operation for the current discussion.

 

3.1 The Lathe Machine

The lathe used for carrying out the turning operation is a centre lathe, the HEIDENREICH & HARBECK 540. Figure 3.1 shows the lathe machine used for the present work, and Table 3.1 gives its specifications.

Image 1

                                                  Fig 3.1: Heidenreich & Harbeck 540 lathe

A lathe is a machine tool which rotates the workpiece on its axis to perform various operations such as cutting, sanding, knurling, drilling, deformation, facing, and turning, with tools applied to the workpiece to create an object which has symmetry about an axis of rotation [Wiki].

                                                    Table 3.1:   Specification of the Lathe          

   Image 2

  3.2 Tooling Details

Carbide tools were selected for carrying out the machinability study of the Al-SiC metal matrix composites. The carbide tool used in the present work is shown in Figure 3.2. Taegutec India Pvt Ltd manufactured the inserts used.

Image 3

                      Fig 3.2:Taegu Tec Carbide inserts

Cemented carbide is the preferred material for parts that must withstand all forms of wear (including sliding abrasion, erosion, corrosion/wear, and metal-to-metal galling) and exhibit a high degree of toughness. It exhibits high compressive strength, resists deflection, and retains its hardness values at high temperatures, a physical property especially useful in metal-cutting applications. [General Carbide]

It provides long life in applications where other materials would not last or would fail prematurely. The cemented carbide industry commonly refers to this material as simply "carbide", although the terms tungsten carbide and cemented carbide are used interchangeably.

The carbide inserts were mounted in a carbide insert holder, shown in Figure 3.3. The geometry of the inserts is shown in Figure 3.4, the chemical composition of the carbide tool in Table 3.2, and the geometrical specifications in Table 3.3.

You can refer to this Wiki article on Tool Bits for more details about the tooling.

Image 4

              Fig 3.3:Tool holder used in present work

Image 5

           Fig 3.4:Geometry of the carbide tool insert

           

Image 6

Table 3.2:Percentage composition of carbide tool


Image 7

Table 3.3:Geometrical specification of carbide tool inserts

 

The melting point and hardness of the carbide tool inserts are 2800 °C and 55-70 HRC respectively.

The insert used for the present work is a C-type uncoated IC20 grade, rhombic-shaped with a 55° included angle and a 7° back rake angle. Blunting of the sharp cutting edge indicates a partially worn-out tool; a broken cutting edge indicates a fully worn-out tool.


3.3 Structure of Data Acquisition System
 

Image 8

Fig 3.5:Information flow of Data Acquisition System

 

Data acquisition system
Tool condition monitoring was implemented through the analysis of vibration data captured by the data acquisition system. Figure 3.6 shows the integration of the components in the data acquisition system.

Image 9

              Fig 3.6 :Integration of the components in data acquisition system

8-Channel Analyzer - The 8-channel analyzer supports both recording and analysis. It is powered by Abacus with Data Physics modular software that lets you select only the measurements you need from an extensive range covering general FFT analysis, data recording and playback analysis, environment testing, structural analysis, acoustics, machinery diagnostics, and production/QC testing. [Data Physics]

SignalCalc Mobilyzer core functions include powerful FFT measurements, high dynamic range, and extensive analysis tools. Standard functions include full engineering units selection and conversion, automatic export, professional reports, easy data management with Signal Map, flexible display of results, cursor functions for detailed analysis, and an intuitive user interface. (You can read more about SignalCalc Mobilyzer here.)

Optional application modules extend the functionality to include real-time Octave and 1/3 Octave analysis, Sound Power and Intensity, RPM-based measurements, Order Tracking, MIMO and Stepped Sine testing, Balancing,
SRS analysis, Demodulation, Rotor dynamics analysis, multi-plane Balancing, Waterfall and Spectrogram presentations, and Throughput-to-Disk.

8-Channel Data Recorder - TEAC data recorders (the ES8) are designed for fast set-up and reliable recording in the field and in the laboratory. [TEAC] Figure 3.7 shows the ES8 recorder. It records to a Compact Flash memory card. Stand-alone operation with the LCD and keypad allows you to configure, monitor, and record; by connecting to a PC over the built-in USB interface, you can monitor waveforms and control recording. Sampling simultaneously on all channels with the 16-bit A/D converter in each channel, approximately four hours of continuous recording is possible on an internal dry cell battery, and an external battery extends the recording time. The sampling frequency ranges from 1/60 Hz to 5 kHz. Combined with various types of sensor amplifiers, a wide range of measurements is possible, such as earthquake, natural phenomenon and structural vibration, mechanical vibration and strain measurement, and bio-signal measurement. [TEAC ES8] It acts as a backup for the 8-channel analyzer. These recorders are designed to provide cost-efficient data recording and front-end solutions.

Image 10

                                                                   Fig 3.7:es8 Data Recorder

Kistler amplifier - The type 5357B12 is shown in Figure 3.8. The charge calibrator can be used to check and calibrate piezoelectric measuring systems. It is connected to the measuring chain either in place of the sensor or in parallel with it. Up to five charge amplifiers can be connected. [Kistler] Operation is via the keyboard or optional interfaces, and the parameters set appear on the LCD. A typical application combines a charge calibrator with 1 to 5 charge amplifiers.

Image 11

                                          Fig 3.8: Kistler amplifier- 5357B12 type

Kistler Triaxial Accelerometer - Vibration signals are important for monitoring the machine condition in a turning operation. Figure 3.9 shows the triaxial accelerometer (8766A500BB). Types 8766A250AA and 8766A5BB are triaxial accelerometers designed for high-temperature applications. They use Kistler's PiezoStar shear element design, which provides a wide operating frequency range and extremely low sensitivity to temperature changes. The sensor combines PiezoStar crystals and high-gain integral hybrid microelectronics to achieve very low sensitivity variation over the operating temperature range compared to other sensing element designs. [Quality Digest]
Image 12

 Fig 3.9: Kistler accelerometer (sensor)

 

Image 13

Table 3.4:Specification of Kistler accelerometer (500BB)

A Kistler accelerometer (8766A500BB) was attached to the tool holder by tapping the sensor to fit in, as shown in Figure 3.10.

Image 14

                                                                                   Fig 3.10:Installation of the sensor

3.4 Experimental setup, Test Procedure and Analysis

Experimental setup and Test procedure

From a review of past TCM studies, a triaxial accelerometer was employed to detect vibrations in multiple directions. The complete data acquisition setup with the conventional lathe is seen in Figure 3.11. The Kistler amplifier was set to a vibration sensitivity of 104.3 Hz according to the requirements; setting up the amplifier is shown in Figure 3.12.

The accelerometer was mounted on the shank holding the cutting tool, as seen in Figure 3.13. The signals detected by the accelerometer were amplified, recorded, and stored by the data acquisition program (Data Physics).

Carbide insert tools with variable tool wear (normal, medium worn-out, and fully worn-out) were mounted on the conventional lathe to cut metal matrix composite (LM6) workpieces of varying composition (LM6+0%SiC, LM6+4%SiC, LM6+12%SiC) with different speeds and feeds and the same depth of cut.

Image 15

                                                     Fig 3.11 : Experimental setup

Image 16

                                              Fig 3.12:Setting up the sensitivity in amplifier

In the machining test, the speed was first set with the help of a tachometer (Figure 3.14), and the accelerometer was mounted on the tool holder to collect the vibration data generated in the machining process (Figure 3.15). Vibration signals were generated, amplified by the Kistler amplifier, and recorded by the 8-channel analyzer. The channels for cutting force data acquisition were programmed and recorded using Data Physics, as shown in Figure 3.16.

Image 17

Fig 3.13: Accelerometer mounted on the tool holder

The experimental data showed that vibration amplitudes varied with the change of cutting conditions. The tests covered different conditions of tool wear with different spindle speeds and feeds, tabulated in Table 3.5, where A is a new tool, B a partially (medium) worn-out tool, and C a fully worn-out tool, and 1, 2, 3 denote the LM6+0%SiC, LM6+4%SiC, and LM6+12%SiC samples respectively. Figure 3.17 shows the three conditions of the carbide tools.

Image 18

                                                          Fig 3.14: Setting the speed

Image 19

Fig 3.15:Machining operation

Image 20

                                   

                                      Fig 3.16 : Example of vibration data collected at tool post

                          

Image 21

Table 3.5 :Experimental arrangement table

 

     

Image 22

                                  Fig 3.17 :Three types of carbide insert

 

3.5 Challenges of Migrating The Monitoring Process to Cloud

Section 3.4 gives an idea of the industrial setup of the process, the components, and their arrangement for machine tool health monitoring. The setup essentially records accelerometer data at a very high sampling rate (like audio recording, 22 kHz) and then stores it. If an online analysis of this data is to be carried out, the data needs to be put into the cloud at a very fast rate without dropping the connection or losing values.

A 16-bit ADC recording 3 channels at 22 kHz produces 2 × 3 × 22,000 = 132 KB of data per second, so 20 hours of recording yields roughly 9.5 GB. That is far too big for in-memory processing, so this data is often resampled at 500 Hz-1 kHz to reduce the number of samples.
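The arithmetic above can be sketched quickly. The decimation helper is a naive illustration only; a real pipeline would low-pass filter before downsampling to avoid aliasing.

```python
# Back-of-envelope data-rate check for the acquisition described above:
# 16-bit samples, 3 channels (x, y, z), 22 kHz sampling.

BYTES_PER_SAMPLE = 2          # 16-bit ADC
CHANNELS = 3                  # x, y, z acceleration
SAMPLE_RATE_HZ = 22_000

bytes_per_second = BYTES_PER_SAMPLE * CHANNELS * SAMPLE_RATE_HZ
gigabytes_per_20h = bytes_per_second * 20 * 3600 / 1e9

# Naive decimation to ~1 kHz keeps every 22nd sample. A real pipeline
# would apply an anti-aliasing low-pass filter first.
def decimate(samples, factor=22):
    return samples[::factor]
```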

We have provided some sample recording with this article which could be helpful for you to analyze the nature and complexity of data.

Image 23

Figure 3.18 : Sample 3-Axis Industrial Accelerometer Data 

The data essentially contains three columns recording x, y, and z axis acceleration respectively. As the volume of data is large, we need a special measurement-to-storage data migration service, commonly known as data streaming. The official Amazon Web Services explanation of streaming data is a good reference on the topic.

It is now clear that such data migration requires chunk-wise transmission, where chunks are identified with a timestamp. An industrial IoT data streaming service must adhere to this basic principle. So, the architecture behind cloud-based recording of our machine tool health monitoring data is that the system must:

  1. record the data locally
  2. open a connection with the server
  3. send a set of data at a stretch with a timestamp identification
  4. store the data in chunks on the server and offer the analytic service chunk-wise access to it
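The steps above can be sketched as follows. Here transmit() is a hypothetical stand-in for the real uplink (WebSocket or HTTP), and the chunk layout is illustrative rather than any particular service's wire format.

```python
# Minimal sketch of the chunked, timestamped transmission principle:
# buffer samples locally, then ship them in identified chunks.
import time

def make_chunk(samples, chunk_id):
    return {
        "id": chunk_id,
        "timestamp_ms": int(time.time() * 1000),  # chunk-level timestamp
        "samples": samples,                       # e.g. [x, y, z] triples
    }

sent = []
def transmit(chunk):            # placeholder for the real uplink
    sent.append(chunk)

buffer = []
for i in range(10):             # pretend these are accelerometer readings
    buffer.append([0.1 * i, 0.2 * i, 0.3 * i])
    if len(buffer) == 5:        # send a chunk every 5 samples
        transmit(make_chunk(buffer, chunk_id=len(sent)))
        buffer = []
```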

 

However, one of the biggest faults of most of today's so-called "cloud for IoT" service providers is that they assume each datum carries an independent timestamp; that is, data is perceived as a single sample at an instant of time. That is where the problem starts. Azure, Bluemix, and allied IoT cloud storage/service providers use some form of MQTT or WebSockets; data can be sent as a series, but it is timestamped serially over time. GE Predix scores over the others in this regard: it provides a logical abstraction of the machines through its asset model services, and data is streamed in chunks, with a unique ID for each chunk and a time series associated with the individual values or the overall chunk itself.

It provides a microservice-based architecture where each independent component can be developed and deployed individually.

Further, it is based on Amazon Web Services, so scaling is never a problem: you pay as you scale. The platform provides secure data access and exchange from the word go using its cloud-based user and application authentication architecture, called UAA. GE Predix also provides visualization services at both the logical level (using its custom view service) and the presentation level (using AngularJS-based Px components).

So you get a complete suite of tools needed to implement an end-to-end data acquisition and visualization service. The catch is that even though the platform is good at acquiring and structuring industrial and machine data, it is not yet well equipped for analyzing this data with powerful machine learning tools, where IBM Bluemix and Microsoft Azure clearly score over Predix.

So, a good way of achieving your RUL framework would be to use Predix to collect your data and Watson or Azure to analyze it, then release this architecture as a service over the cloud which can be consumed by applications.

In this article, we focus on developing a microservice with GE Predix to consume the data coming from the sensors. We show the service in action with a simple simulation framework powered by an Intel Edison board, so that you can replicate the data-gathering process in the lab, test the service, its scalability, and robustness, and then work on the real-time integration aspect.

In the next chapter, we elaborate on the GE Predix platform in general and on getting started with application development on the platform.

4. Getting Started with GE Predix

Let us look at the overall Predix architecture before starting our journey with Predix.

Image 24

Figure 4.1 GE Predix Architecture ( Image source- i-programmer)

As you can see, Predix is based on Cloud Foundry, which provides easy development and deployment to the cloud. At its core, the platform is built on Amazon Web Services, and your applications are hosted on AWS via Predix. Predix manages machines as assets through sets of industry-standard tags that uniquely identify a machine, its geographic location, maker, properties, and so on.

The heart of Predix is its connectivity services, essentially a protocol suite tailor-made for industrial IoT.

Predix provides asset services, which include metadata management about the assets, and data acquisition services. The analytic services include views, an auditing service, and visualization services. These services can be consumed by mobile-ready apps optimized for small form factors.

Before you start with Predix, the first thing you need is a predix.io account. Create a 60-day trial account with predix.io and wait for an email from Predix giving you your account details. If you do not get it soon (in all likelihood you won't), write to the Predix team, raise support tickets, exchange some mails and then some more, and finally you will get your account (if you are lucky enough). At least this has been the case with our team: all three of us registered with Predix, but only Abhishek got the account, so we used his account to dig deep into Predix.

Assuming you get an account from Predix, you can use their Predix-ready development environment called DevBox. It is essentially a Unix-based system, so you need VMware if you are a Windows user (Cygwin doesn't work). We did not particularly like running our development environment in VMware, so we went ahead with hacks and created our own development environment without DevBox (which is in no way difficult to set up, but no easy go either!).

As the core of GE Predix is Cloud Foundry, you need to install a Cloud Foundry client on your machine first. Also, if you are a starter, which in all likelihood you are if you are reading this article, you will need the plenty of sample applications provided by GE to understand how things work. Most of these are hosted on Git, so you are advised to install Git on your machine too.

Predix's microservice framework is based on Java Spring, so you need at least JDK 1.8 installed on your machine. We advise you to install NetBeans 8 with Java for easy microservice development.

GE's web apps use Node.js as the backend, with Python and variants of Angular.js on the frontend. It supports npm packages and manages frontend packages through Bower, so you need to install Node.js as well as Bower on your system.

Here is a summary of the software (or apps, as they say!) you need before you can write your first Hello World program in Predix.

  • Git
  • Cloud Foundry
  • Node.js ( npm)
  • Bower
  • Python ( 2.7- some services run strictly on 2.7)
  • Netbeans IDE ( >8)

Having done these steps, let's start the development.

Installing Cloud Foundry and Hello World

1. Install the cf CLI from https://github.com/cloudfoundry/cli/releases

2. In a command prompt, type: cf login -a https://api.system.aws-usw02-pr.ice.predix.io

Image 25

Figure 4.2 Login to your Predix API using Cf

3. Download the Predix Hello World web app:

https://github.com/PredixDev/Predix-HelloWorld-WebApp

4. Extract the Hello World app

Image 26

Figure 4.3 Hello Predix Directory Structure

5. Open manifest.yml and change application name

6. Change the content of index.html

7. Run cf push

Image 27

Figure 4.4 Pushing your Hello Predix app to cloud

Image 28

Figure 4.5 : App running in Cloud

8. Go to your predix.io console to see the app running

Image 29

Figure 4.6 Predix Console- Hello Predix running

9.Click on the App.

Image 30

Figure 4.7 : App URl through App view in console

                   

Copy the URL; this is the URL at which your service is running.

10. Open the URL (do not forget to use https:// with it)

Image 31

Figure 4.8 Testing Hello Predix in Browser

When you call the cf push command from your app directory, the buildpack searches for the manifest.yml file, which is the configuration file for Predix web applications. At a later stage we shall also see how to add services to this app.
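For reference, a minimal manifest.yml of the shape cf push expects might look like the following. The application name, memory, and buildpack values here are placeholders for illustration, not taken from the actual sample app.

```yaml
# Minimal Cloud Foundry manifest sketch; replace the name with a
# globally unique one before pushing.
applications:
  - name: your-name-hello-predix
    memory: 64M
    buildpack: nodejs_buildpack
```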

 

Setup maven configuration for getting access to Predix repo

As our bridge app, which gathers data from the end (or edge) hardware and sends it to the Predix cloud, is developed with Java's Spring framework, you need to set up your Java development environment for Predix. The example apps that Predix provides are all Maven apps linked to a Maven repo. But there is a small problem: the actual Predix repo in Maven is a private repo that needs authentication by the Maven applications. The authentication token is generated as a hash from your Predix credentials, and generating it needs another Predix web app. You need to create an encrypted password from the site below and add it as credentials in your Maven config file to be able to use the Predix Maven repos.

 

1. Open c:\users\<pc-user-name>\.m2

2. Open settings.xml; here you have to change the username and encrypted password

3. Set up an encrypted password at:

https://artifactory.predix.io/artifactory/webapp/#/login

4. Click on the username at the top, and on the next page enter your password

Image 32

Figure 4.9 Obtaining Encrypted password for your Predix Maven repo

Download the settings.xml provided with this article and replace YOUR_PREDIX_EMAIL and YOUR_ENCRYPTED_PASSWORD with the values you have just generated.

<servers>
    <server>
        <id>predix.repo</id>
        <username>YOUR_PREDIX_EMAIL</username>
        <password>YOUR_ENCRYPTED_PASSWORD</password> <!-- Obtained from https://artifactory.predix.io -->
    </server>
</servers>

Download settings.xml for Maven - 851 B

Creating a Time Series in Predix

We will come back to Java microservice development very soon, but before that you need to follow quite a few more steps. The microservice is a secured service, which works based on the secured API model of Predix. Predix provides two security services: UAA (User Account and Authentication) and the Authentication Service. UAA is the most basic step towards working with Predix, as the other services are all authenticated through it.

1. Open a UAA service from the Predix.io catalog

Image 33

Figure 4.10 Subscribing to UAA Service

 

 

2. Create a new service instance: The subscribe button takes you to a new UAA service instance creation page. You need to specify an administrative password (admin client secret), which we have set to admin, and a subdomain, which can be anything; all your apps that utilize this authentication need to run with this subdomain. We are using rupam here. Give the service instance a name, which we have set to rupamUAA.

Image 34

Figure 4.11 Creating New Service Instance in UAA

 

 

3. Clone the time series app: Starting with GE Predix time series is certainly not a straightforward affair. We suggest you clone their reference app repo (or, even better, download the pre-configured sample app we have provided):

git clone https://github.com/PredixDev/timeseries-bootstrap.git

4. Build the Maven project from inside the app directory:

mvn clean package

Maven will build the time series app:

Image 35

Figure 4.12 Maven build of time series reference app

5. Subscribe to Time Series: Predix has its own time series service in the Cloud Foundry marketplace, and your time series app needs to leverage it. So, just like UAA, subscribe to the time series service from the Predix catalog. While subscribing, you will be asked to select a UAA; as we have already created our UAA instance, rupamUAA, we select it here. The following figure shows the time series service subscription with UAA.

Image 36

Figure 4.13: Subscribing to time series services with your UAA

 

6. Check time series reference architecture

https://www.predix.io/docs/#Y5J5gFHz

In order to work with Predix time series, you need to understand the architecture. The service has two endpoints: an ingestion service and a query service. The ingestion service is a WebSocket-based hook into Predix data streaming. In our Java microservice, we are going to tap into this hook and expose a service endpoint.

 

7. Bind both time series and uaa with your created app

cf bind-service RupamsHelloPredixWeb rupamUAA

cf bind-service RupamsHelloPredixWeb rupamTs

Replace RupamsHelloPredixWeb with whatever name you gave your Predix hello world app.

Image 37

Figure 4.13: Binding Service instances with main app

After binding the service instances to the app, you can verify the binding from your Predix.io console by exploring your time series instance, which we have named rupamTs.

Image 38

Figure 4.14 Verifying service bind

9. Getting the app environment variables: Programs utilizing Predix services need the app environment variables to communicate with the cloud app structure, which in turn is responsible for the service gateways. When you bind your UAA service to the app, your program uses the subscribed services through these environment variables.

In a command prompt, type cf env RupamsHelloPredixWeb >> env.rtf to save all the environment variables into a single file called env.rtf.
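The same credentials are available to a running Cloud Foundry app through the VCAP_SERVICES environment variable. As a minimal Node.js sketch (not part of the bootstrap app), here is how the ingestion URI and zone ID could be pulled out of that JSON; the structure mirrors the env dump shown later in this section:

```javascript
// Extract the time series ingestion URI and zone id from a VCAP_SERVICES
// JSON string. Inside a Cloud Foundry app you would pass
// process.env.VCAP_SERVICES; the key names follow the cf env output.
function getIngestConfig(vcapJson) {
  var vcap = JSON.parse(vcapJson);
  var ingest = vcap["predix-timeseries"][0].credentials.ingest;
  return {
    uri: ingest.uri,                             // wss:// ingestion endpoint
    zoneId: ingest["zone-http-header-value"]     // value for the Predix-Zone-Id header
  };
}
```

A quick usage example: `getIngestConfig(process.env.VCAP_SERVICES)` returns an object you can feed straight into a WebSocket client's connect call.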

Another quirk of Predix: even after you have created the time series instance with your UAA, that binding only permits your time series instance to use your UAA; it does not authorize time series API calls. You need to set the permission manually. This step is called an Access Grant: the service's environment variables must be appended to the UAA scope, which is done through another Predix web app.

First, open your UAA instance from the Predix.io console (in our case, rupamUAA). On the right side, you will see a 'Configure UAA' option.

You first need to log in as admin, which requires the Admin URL of your UAA.

Open env.rtf, find the UAA URL, and paste it into the URL field of the admin login.

Image 39

Figure 4.15: Setting the UAA admin URL in the Predix UAA web app (called the Predix starter kit app)

10. Create a UAA client: The administrative credentials must not be used to authorize end applications; that is done through another entity called a UAA client. Once you have logged in to your Predix starter web app, select the create client option.

The client is what generates access tokens for the app from the UAA server, so permissions must be granted on the client. The client's scope needs the zone IDs (taken from the environment variables) of the services you want to authorize through it.

From your env.rtf, copy the zone credentials of the time series service (both ingestion and query) after the resource.id field in the client creation form. It should look something like the following (this shows our rupamTs with rupamUAA while creating the client).

Quote:

{"client_id":"rupam","client_secret":"rupam","scope":["uaa.none","openid"],"authorized_grant_types":["authorization_code","client_credentials","refresh_token","password"],"authorities":["openid","uaa.none","uaa.resource","timeseries.zones.68262012-bfea-4dc9-abd5-d1360153f783.user","timeseries.zones.68262012-bfea-4dc9-abd5-d1360153f783.ingest","timeseries.zones.68262012-bfea-4dc9-abd5-d1360153f783.query"],"autoapprove":["openid"]}

 

The form view is shown in figure 4.16.

Image 40

Figure 4.16 Creating new UAA Client by adding Zone IDs of time series in the scope. 
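With this client in place, any program can request an access token from UAA using HTTP Basic authentication with client_id:client_secret and the client_credentials grant (the same grant type configured in application.properties below). As a minimal Node.js sketch, here is how the token request could be assembled; the host and credentials in the usage comment are the ones created in this walkthrough:

```javascript
// Assemble the HTTPS request options for a UAA client_credentials token call.
// The Authorization header is HTTP Basic auth over client_id:client_secret.
function buildTokenRequest(uaaHost, clientId, clientSecret) {
  var basic = Buffer.from(clientId + ":" + clientSecret).toString("base64");
  return {
    host: uaaHost,
    path: "/oauth/token?grant_type=client_credentials",
    method: "POST",
    headers: {
      "Authorization": "Basic " + basic,
      "Content-Type": "application/x-www-form-urlencoded"
    }
  };
}

// e.g. pass buildTokenRequest("rupam.predix-uaa.run.aws-usw02-pr.ice.predix.io",
//   "rupam", "rupam") to https.request(...) and read access_token from the
//   JSON response body
```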

Configuring the Java Spring Microservice with the New Client

Now go back to /config/application.properties in the Spring time series bootstrap app and add your client ID and client secret, as shown in figure 4.17.

Image 41

Figure 4.17 Configuring UAA Client  credentials in the bootstrap app.

The application.properties (edit it under <project directory>/config) looks something like this:

///--------------application.properties---------//

#properties for running locally or for unit tests
logging.level.root=INFO
logging.level.org.springframework=INFO

#<currentworkingdir>/config/application.properties are local overrides to src/main/resources (aka classpath)/application.properties
server.port=9000
spring.profiles.active=local

#you should change these properties to match your own UAA, ClientId, User and PredixAsset instances.
predix.oauth.certLocation=91d20548-2107-4bdb-be47-787c25ca69ac.predix-uaa.run.aws-usw02-pr.ice.predix.io
predix.oauth.tokenType=JWT
predix.oauth.resource=/oauth/token
predix.oauth.restPort=80
predix.oauth.grantType=client_credentials
predix.oauth.clientIdEncode=true
# e.g. predix.oauth.restHost=36492c1e-657c-4377-ac51-add963552460.predix-uaa.cloud.com
predix.oauth.restHost=rupam.predix-uaa.run.aws-usw02-pr.ice.predix.io
predix.oauth.clientId=rupam:rupam
predix.websocket.uri=wss://gateway-predix-data-services.run.aws-usw02-pr.ice.predix.io/v1/stream/messages
#predix.websocket.uri=wss://put.your.websocket.service.instance.here/v1/stream/messages
predix.websocket.zoneid=68262012-bfea-4dc9-abd5-d1360153f783

//////////---------------------------------------------------------------------------------//

Here predix.websocket.uri is the time series ingestion WebSocket URI, which will be a wss:// address.

predix.oauth.clientId must be specified in client_name:client_secret format, which you must note down while creating the UAA client.

The corresponding environment variables are given below, so that you can match them:

 

Getting env variables for app RupamsHelloPredixWeb in org abhishek.nandy81@gmail.com / space dev as abhishek.nandy81@gmail.com...
OK

System-Provided:

{
"VCAP_SERVICES": {
"predix-timeseries": [
{

"credentials": {
"ingest": {
"uri": "wss://gateway-predix-data-services.run.aws-usw02-pr.ice.predix.io/v1/stream/messages",
"zone-http-header-name": "Predix-Zone-Id",
"zone-http-header-value": "68262012-bfea-4dc9-abd5-d1360153f783",
"zone-token-scopes": [
"timeseries.zones.68262012-bfea-4dc9-abd5-d1360153f783.user",
"timeseries.zones.68262012-bfea-4dc9-abd5-d1360153f783.ingest"

]

},

"query": {

"uri": "https://time-series-store-predix.run.aws-usw02-pr.ice.predix.io/v1/datapoints",
"zone-http-header-name": "Predix-Zone-Id",
"zone-http-header-value": "68262012-bfea-4dc9-abd5-d1360153f783",
"zone-token-scopes": [
"timeseries.zones.68262012-bfea-4dc9-abd5-d1360153f783.user",
"timeseries.zones.68262012-bfea-4dc9-abd5-d1360153f783.query"
]

}

},

"label": "predix-timeseries",
"name": "rupamTs",
"plan": "Tiered",
"provider": null,
"syslog_drain_url": null,
"tags": [
"timeseries",
"time-series",
"time series"
]

}

],

"predix-uaa": [

{
"credentials": {
"issuerId": "https://rupam.predix-uaa.run.aws-usw02-pr.ice.predix.io/oauth/token",
"subdomain": "rupam",
"uri": "https://rupam.predix-uaa.run.aws-usw02-pr.ice.predix.io",
"zone": {
"http-header-name": "X-Identity-Zone-Id",
"http-header-value": "508a9e67-4ff4-41c5-8851-45e20124be4d"
}

},

"label": "predix-uaa",
"name": "rupamUAA",
"plan": "Tiered",
"provider": null,
"syslog_drain_url": null,
"tags": []
}

]

}

}


{

"VCAP_APPLICATION": {

"application_id": "5654088b-c7a3-44d5-b054-c7be41aa7911",
"application_name": "RupamsHelloPredixWeb",
"application_uris": [
"rupamshellopredixweb.run.aws-usw02-pr.ice.predix.io"
],

"application_version": "5e55585e-1d93-414f-bc60-2051210ad4f3",

"limits": {
"disk": 1024,
"fds": 16384,
"mem": 64
},

"name": "RupamsHelloPredixWeb",
"space_id": "cf1c5b44-e183-48cb-8229-39d69f6d27b9",
"space_name": "dev",
"uris": [
"rupamshellopredixweb.run.aws-usw02-pr.ice.predix.io"
],

"users": null,
"version": "5e55585e-1d93-414f-bc60-2051210ad4f3"
}

}
No user-defined env variables have been set
No running env variables have been set
No staging env variables have been set


///////////////////////////////////////////////////////////////////

If you now compile the time series bootstrap app, the build should succeed.

 

However, this microservice that Predix provides doesn't come with any API endpoints!

What does that mean? It means the services cannot be called or utilized from any device: even after you deploy them to the cloud, they are just services that can be called by other services, not by a device.

But we want our industrial accelerometer to be able to put data into the Predix cloud using the time series service. Right?

For that, we need to define an API endpoint for the microservice. So I have added an endpoint to expose our bootstrap microservice to the outside world using standard Spring methods.

The main bean class provided by the bootstrap application is WebServiceClientImpl.java, which defines methods for connecting and streaming data to the Predix cloud:

  • postTextWSData
  • postTextArrayWSData
  • postBinaryWSData

These are the sample methods provided with the implementation. However, we want to stream time series data, not binary or text data.

Interestingly, the HelloController class under com.ge.predix.solsvc.service implements the methods you need to look into.

postDataTest is the method that allows you to send time series data (a single entry) to the Predix cloud using the WebSocket client.

However, you cannot send data in an arbitrary format. You need to send JSON data in the Predix-supported format, which includes a messageId and a body carrying the tag name, datapoints and attributes; the tag is the field by which the query service identifies your data series.
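As a quick sketch, the expected payload can be built in a few lines of Node.js (buildIngestMessage is an illustrative helper, not part of the bootstrap app; the attribute values are the ones used in this article's Java code):

```javascript
// Build a single-datapoint ingestion message in the Predix time series
// format: a messageId plus a body entry with a tag name, datapoints as
// [timestamp, value, quality] triples, and optional attributes.
function buildIngestMessage(tag, value) {
  var now = Date.now();
  return JSON.stringify({
    messageId: String(now),
    body: [{
      name: tag,                        // the tag the query service searches by
      datapoints: [[now, value, 3]],    // quality code 3, as in the article's message
      attributes: { host: "rupamServer", customer: "rupam" }
    }]
  });
}
```

Calling `buildIngestMessage("acc_x", 33)` produces the same kind of JSON string the Java code below assembles by hand.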

So, we update the method so that it supports Predix-specific data entry, as below:

@Autowired
protected RestClient restClient;

@Autowired
private WebSocketClient client;

@Value("${predix.websocket.uri}")
private String injectionUri;

@Value("${predix.websocket.zoneid}")
private String zoneId;

public String postDataTest(String name, String value) {
    // headers required for authentication and for the Predix service
    List<Header> headers = this.restClient.getSecureTokenForClientId();
    headers.add(new BasicHeader("Predix-Zone-Id", this.zoneId)); //$NON-NLS-1$
    // Origin header required as it is not being set by the websocket
    headers.add(new BasicHeader("Origin", "http://localhost")); //$NON-NLS-1$ //$NON-NLS-2$
    WebSocket ws = null;
    String s = "";
    // connect to the websocket once
    try {
        ws = this.client.connectToWS(this.injectionUri, headers);

        // post the data point in the Predix time series JSON format
        long millis = new java.util.Date().getTime();
        String testMessage1 = "{\"messageId\": \"R" + millis + "\",\"body\": [{\"name\": \"" + name
                + "\",\"datapoints\": [[" + millis + "," + value
                + ",3]],\"attributes\": {\"host\": \"rupamServer\",\"customer\": \"rupam\"}}]}"; //$NON-NLS-1$
        this.client.postTextWSData(ws, testMessage1);
        s = "Data Posted.. mId=" + millis + " tag=" + name + " value=" + value
                + " msg=" + testMessage1 + " url=" + this.injectionUri + " zone=" + this.zoneId;
    } catch (IOException e) {
        e.printStackTrace();
        s = "Could not post data: " + e.getMessage();
    } catch (WebSocketException e) {
        s = "Could not post data: " + e.getMessage();
    }

    // wait added for time delay in callback from websocket endpoint
    try {
        Thread.sleep(2000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }

    // disconnect once after posting all messages
    try {
        this.client.disconnectFromWS(ws);
    } catch (IOException e) {
        e.printStackTrace();
    }
    return s;
}

The next part is to define an API endpoint.

This is done by using the request mapping technique:

@RequestMapping("/hello")
   public String hello(@RequestParam(value="name",defaultValue="rupam") String name,@RequestParam(value="value",defaultValue="1") String value) {
       return "hello "+ postDataTest(name, value)+ "</br>Greetings from Predix Spring Boot! " + (new Date());
   }

So, your_predix_web_app_url/hello?name=field_name&value=field_value becomes the complete GET URL for calling the service, which internally calls postDataTest, which formats the data in the Predix-specific format.

Run the NetBeans project. By default the local service runs at port 9000, so if you enter the following in a browser:

localhost:9000/hello?name=acc_x&value=33

it will enter 33 against a tag named acc_x.
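If you are calling the endpoint from code rather than a browser, the URL is easy to assemble. A small Node.js sketch (buildHelloUrl is an illustrative helper; the base URL is the local one used above), with the tag and value URL-encoded so readings like negative numbers survive the trip:

```javascript
// Build the GET URL for the /hello endpoint, URL-encoding tag and value.
function buildHelloUrl(base, tag, value) {
  return base + "/hello?name=" + encodeURIComponent(tag) +
         "&value=" + encodeURIComponent(String(value));
}

var url = buildHelloUrl("http://localhost:9000", "acc_x", 33);
// → "http://localhost:9000/hello?name=acc_x&value=33"
```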

Having tested the data locally, all you need to do is deploy the app to the cloud. Go to the bootstrap app directory and use cf push.

But before you can push the app to the Predix cloud, you need to configure it and update the pom.xml in the project directory. Here is the pom of our sample project.

Image 42

Figure 4.18: Data storage through Local API Endpoint( Click to enlarge)

 

 Now Deploy the Solution to Cloud

 

We need to use cf push for pushing the project to cloud.

Go to the project directory that contains pom.xml and manifest.yml; cf reads its configuration from manifest.yml. Open your pom, find the artifact ID, and put it in the yml file. Then go to the target directory, find the .jar file, and enter its name as the path.

Here is a pom.xml snippet:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.ge.predix.solsvc</groupId>
    <artifactId>rupam-time-series</artifactId>
    <version>1.1.6</version>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.2.3.RELEASE</version>
        <relativePath /> <!-- lookup parent from repository -->

 

You need to change the yml file accordingly.

Here is the yml file:

---
applications:
- name: rupam-time-series
  buildpack: java_buildpack
  path: target/rupam-time-series-1.1.6.jar

spring:
  profiles: development
server:
  port: 9000

---
spring:
  profiles: production
server:
  port: 0

Important: Make sure the server port is 0 in the yml file.

Also go to src/main/resources/application.properties: these are the production properties. Set the server port to 0 here too, and make sure this file has the same credentials.

The path should be the relative path to the compiled jar file, which will generally be in the target directory. Server port 0 lets the cloud assign any available port rather than binding to a specific port, which might not always be free due to other services.

Go to the console and note the cloud URL.

Image 43

Figure 4.19: Cloud API End point for your data ingestion service

 

You can push data the same way you pushed it locally, or use our API endpoint (while our free trial lasts):

https://rupam-time-series1.run.aws-usw02-pr.ice.predix.io/hello?name=rupam&value=51.9 ( change value and tag)

Alright, we are able to call the endpoint using a browser. But how do we do it from our IoT device? Unix provides a great utility called curl, with which you can call any URL from the command prompt.

Calling the service from command prompt

Our IoT device will call this service using the curl utility. So, in a command prompt, type curl followed by the above URL and check whether the data is posted.

Validating Data

Go to the API explorer section in the Predix starter web app. Put the zone ID of the time series query service in the zone ID field and select time-dependent data. It opens a JSON query template for one year of data by default; just change the name and limit to finally see your hard work bearing fruit :)

Image 44

Figure 4.20 Validating data ingestion using Query

 

Essentially the application should also have a visualization client, but the complexity of developing one in the Predix ecosystem is beyond the scope of this article. You can obtain the data through the API explorer and parse it with any JSON parser to get the data array, which you can save in any format.
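As a sketch of that parsing step in Node.js: the tags → results → values nesting below is the general Predix time series query response shape; verify it against what your own API explorer actually returns, as extractSeries is an illustrative helper.

```javascript
// Pull [timestamp, value] pairs for the first tag out of a query response.
// Each entry in "values" is a [timestamp, value, quality] triple.
function extractSeries(responseJson) {
  var resp = JSON.parse(responseJson);
  var values = resp.tags[0].results[0].values;
  return values.map(function (v) {
    return { time: v[0], value: v[1] };
  });
}
```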

If you want to see how the end visualization looks in Predix, you can open the running sample provided by Predix:

https://rmd-ref-app.run.aws-usw02-pr.ice.predix.io/dashboard

Quote:

Username: app_user_1

Password:app_user_1

You can see the UI app running as shown in the figure below.

Image 45

Figure 4.21 Sample RMD reference app result showing live data monitoring 

The app also points to its GitHub repo. You can download the RMD app and build it; we skip that here because covering everything needed to build the app is outside the scope of this particular article.

 

The question is: if you want to test the services, how are you going to do it? Access to a lathe and an industry-standard monitoring service is not always available. However, you can recreate the use case with a simple hardware setup. We present an Intel Edison based environment with which we can send data to our Predix data ingestion endpoint.

5. Intel Edison Based Simulation For Data Ingestion

Irrespective of who you are and what level of expertise you have, if you are working with IoT, nothing gives you as much pleasure as seeing real-time data flow to the cloud. Partly because IoT, after all, is about "connecting the things", and how can you say things are connected if they are not on the cloud?

Pushing data through real devices serves purposes beyond satisfying the developer: it can act as a test bench for studying the latency, robustness and throughput of the system. Therefore we thought it would be a good idea to replicate the scenario we developed in section 3, so that developers can reproduce the real use case.

As we have learnt from section 3, RUL prediction can essentially be performed using an industrial-grade 3-axis accelerometer. But don't worry if you don't have one: you can always use low-cost IoT prototyping boards like the Raspberry Pi or Intel Edison for this.

 

Image 46

Figure 5.1 Accelerometer connection with Edison 

Connecting the accelerometer to the Edison is really simple: just connect it to the I2C port of the Grove base shield as shown above.

Node.js has an accelerometer package for the Edison, jsupm_mma7660. Install the package with npm; example code for logging accelerometer data ships with the package.

We are going to fork that code and add a curl command to push the data to the Predix cloud.

var digitalAccelerometer = require('jsupm_mma7660');

var myDigitalAccelerometer = new digitalAccelerometer.MMA7660(
    digitalAccelerometer.MMA7660_I2C_BUS,
    digitalAccelerometer.MMA7660_DEFAULT_I2C_ADDR);

// the sensor must be in standby mode while the sample rate is changed
myDigitalAccelerometer.setModeStandby();
myDigitalAccelerometer.setSampleRate(digitalAccelerometer.MMA7660.AUTOSLEEP_64);
myDigitalAccelerometer.setModeActive();

var ax, ay, az;
ax = digitalAccelerometer.new_floatp();
ay = digitalAccelerometer.new_floatp();
az = digitalAccelerometer.new_floatp();

var outputStr;

var myInterval = setInterval(function()
{
    myDigitalAccelerometer.getAcceleration(ax, ay, az);
    outputStr = "Acceleration: x = "
        + roundNum(digitalAccelerometer.floatp_value(ax), 6)
        + " y = " + roundNum(digitalAccelerometer.floatp_value(ay), 6)
        + " z = " + roundNum(digitalAccelerometer.floatp_value(az), 6);
    console.log(outputStr);
}, 50);

function roundNum(num, decimalPlaces)
{
    var extraNum = (1 / (Math.pow(10, decimalPlaces) * 1000));
    return (Math.round((num + extraNum)
        * (Math.pow(10, decimalPlaces))) / Math.pow(10, decimalPlaces));
}

The setInterval callback runs every 50 ms, giving a sampling rate of 20 Hz (too low for comfort in an industrial setting!). digitalAccelerometer.floatp_value(ax), floatp_value(ay) and floatp_value(az) give the acceleration along the three axes.

Use another npm package, sys, which lets you print from Unix-style system calls in Node.js, and use child_process to execute the system call as a separate process.

 

var sys = require('sys');
var exec = require('child_process').exec;
function puts(error, stdout, stderr) { sys.puts(stdout); }

Let's assume that we want to store the x-acceleration:

var s = "curl \"https://rupam-time-series1.run.aws-usw02-pr.ice.predix.io/hello?name=acc_x&value="
        + roundNum(digitalAccelerometer.floatp_value(ax), 2) + "\"";
exec(s);

That's it: this puts the data into your Predix cloud. You can modify the microservice to post multiple data points at once as a chunk, as we discussed at the beginning.
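That batching idea can be sketched on the device side as well; buildBatchMessage below is a hypothetical helper (not part of the package), where samples would be accumulated in the sampling loop above before being sent in one ingestion message:

```javascript
// Batch several accelerometer samples into one Predix ingestion message
// instead of issuing one curl call per reading. samples is an array of
// {time, x} objects; each becomes a [timestamp, value, quality] triple.
function buildBatchMessage(tag, samples) {
  return JSON.stringify({
    messageId: "batch-" + Date.now(),
    body: [{
      name: tag,
      datapoints: samples.map(function (s) { return [s.time, s.x, 3]; })
    }]
  });
}
```

For example, collecting 20 readings (one second at 20 Hz) and posting them as one message cuts the HTTP overhead twentyfold.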

 

Now you can simulate the system and see the data. Use any standard programming language to consume the REST APIs of the query service and obtain the result data for your analytical framework.

6. Conclusion

In this tutorial we have walked through the process of analyzing tool lifetime using accelerometer data in a real industrial setup. We presented the sensors and other components, along with the industrial setup and standards for data acquisition. We then introduced Predix time series based data streaming and ingestion services to take the data collection process to the cloud, and finally presented a simulation-driven method to replicate the industrial setup and test the service with an Intel Edison.

We have also provided the recorded dataset: 003 is the first row of table 3.5, 004 the next row, and so on. You can upload the RTF data manually to a machine learning framework like Azure and then build your RUL system.

We hope this tutorial helps you get started with IIoT, flattens the learning curve, and makes it easier for you to build your own IIoT architecture.

 

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Authors

Grasshopper.iics
CEO Integrated Ideas
India India
gasshopper.iics is a group of like minded programmers and learners in codeproject. The basic objective is to keep in touch and be notified while a member contributes an article, to check out with technology and share what we know. We are the "students" of codeproject.

This group is managed by Rupam Das, an active author here. Other Notable members include Ranjan who extends his helping hands to invaluable number of authors in their articles and writes some great articles himself.

Rupam Das is mentor of Grasshopper Network,founder and CEO of Integrated Ideas Consultancy Services, a research consultancy firm in India. He has been part of projects in several technologies including Matlab, C#, Android, OpenCV, Drupal, Omnet++, legacy C, vb, gcc, NS-2, Arduino, Raspberry-PI. Off late he has made peace with the fact that he loves C# more than anything else but is still struck in legacy style of coding.
Rupam loves algorithm and prefers Image processing, Artificial Intelligence and Bio-medical Engineering over other technologies.

He is frustrated with his poor writing and "grammer" skills but happy that coding polishes these frustrations.
Group type: Organisation

116 members


Abhishek Nandy
Software Developer
India India
I am into software Development for less than a year and i have participated in 2 contests here at Codeproject:-Intel App Innovation Contest 2012 and Windows Azure Developer Challenge and been finalist at App Innovation contest App Submission award winner as well won two spot prizes for Azure Developer Challenge.I am also a finalist at Intel Perceptual Challenge Stage 2 with 6 entries nominated.I also won 2nd prize for Ultrabook article contest from CodeProject
Link:-
http://www.codeproject.com/Articles/523105/Ultrabook-Development-My-Way

Microsoft MVA Fast Track Challenge Global Winner.
Ocutag App Challenge 2013 Finalist.

My work at Intel AppUp Store:-

UltraSensors:-
http://www.appup.com/app-details/ultrasensors
UltraKnowHow:-
http://www.appup.com/app-details/ultraknowhow

Moumita Das
Software Developer Integrated Ideas
India India
No Biography provided


