Data Extraction and Integration

Tvarit brings know-how of the various data sources available in a factory: Historians, Energy Meters, Lab Quality Systems, MES, ERP, PLCs, spreadsheets, logs, SQL databases, PCs, batch reports, MTConnect, OPC-UA, etc. Data Extraction Consultancy is categorized into five parts.

  • Control Systems Data Extraction: Tvarit's expertise helps you extract data from various hardware controller systems such as PLC, SCADA, DCS, etc.
  • Integration via Bus Systems: Tvarit experts bring know-how of various bus systems such as J-Bus, Modbus, PROFIBUS, EtherNet/IP, EtherCAT, CAN bus, etc.
  • IT Systems: Integration with IT systems is done either via REST/SOAP APIs or via JDBC/ODBC connectors. Some of the systems we have experience with are SAP ERP, PP, WM, MES, SPS control, APC, etc.
  • Data Storage Systems Integration: We support our clients with various DBMS technologies such as MySQL, Postgres, and NoSQL databases like MongoDB, InfluxDB, Elasticsearch, and Hadoop. Further, we help them set up data lakes and data warehouses using these technologies.
  • Communication Protocols Integration: This includes integrating data streams via various communication protocols such as TCP/IP, FTP, OPC-UA, MQTT, MTConnect, PI systems, etc.
Data Preparation

Data collected from different sources may be dirty, and it should be cleaned before it is loaded. The problem with polluted data is that there is no fixed way of dealing with it, and the problem is universal. Polluted values degrade performance and predictive capacity, and they have the potential to distort all of our statistical parameters; the way they interact with outliers skews our statistics further. Conclusions can thus be misleading.

There can be various causes of bad and dirty data:

  • Bugs in a PLC due to power failures, which give rise to missing data.
  • Wrong configuration of machine controllers, which produces out-of-permissible-range values for a sensor.
  • Network issues (3G, 4G, Wi-Fi, etc.), which give rise to incomplete data.
  • Wrong queries written for extracting the data from databases.
  • Bugs arising while merging data coming from multiple data sources.

Many times, work orders or product quality results are captured manually, whereas automated systems are in place for sensor data, so combining the two creates lots of bad data.
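A minimal sketch of such a pre-load sanity check, assuming a merged frame of sensor and manually entered quality data (the column names, thresholds, and example values are illustrative, not Tvarit's toolkit API):

```python
import pandas as pd

# Hypothetical merged frame: automated sensor readings joined with
# manually entered quality results.
df = pd.DataFrame({
    "temp_c":  [72.1, 71.8, None, 4500.0, 72.3],  # None: PLC dropout; 4500: mis-configured controller
    "quality": ["ok", "ok", "ok", "scrap", None],  # manual entry, sometimes missing
})

# Basic sanity report before loading the data downstream:
# count missing values per column, and flag readings outside a
# physically permissible range for the sensor.
report = {
    "missing_per_column": df.isna().sum().to_dict(),
    "temp_out_of_range": int(((df["temp_c"] < -40) | (df["temp_c"] > 200)).sum()),
}
```

Rows flagged here would then be routed to the appropriate cleanup module rather than silently loaded.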

Tvarit experts come with a very powerful data clean-up toolkit. It includes pre-written data-cleanup algorithmic modules such as sanity handling, missing-value handling, multicollinearity analysis, Mahalanobis distance, data distribution checks, best-bucket inference, etc. Once the data has been cleaned, it produces precise results when ML/DL algorithms are applied. Consistent data is therefore essential for reliable decision making. We at Tvarit sanitise the data as surgically as possible to obtain the best possible solution.
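To illustrate one of the modules named above, here is a minimal sketch of Mahalanobis-distance outlier flagging (a generic implementation of the standard technique, not Tvarit's actual module; the threshold of 3 is an illustrative assumption):

```python
import numpy as np

def mahalanobis_outliers(X, threshold=3.0):
    """Flag rows of X whose Mahalanobis distance from the sample mean
    exceeds the threshold."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv_cov = np.linalg.pinv(cov)  # pseudo-inverse guards against singular covariance
    diff = X - mu
    d = np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))
    return d > threshold

# Example: 200 normal three-sensor readings plus one injected gross outlier.
rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(200, 3))
X[0] = [10.0, 10.0, 10.0]  # injected anomaly
flags = mahalanobis_outliers(X)
```

Unlike a per-sensor range check, this flags readings that are jointly implausible across correlated sensors even when each value looks normal on its own.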

Data Labelling
Tvarit provides managed data labelling teams. Enrich your massive amounts of data in a transparent and agile way, with high levels of accuracy, consistency, and speed. We provide labelling for all kinds of data: image, text, video, sensor, and time-series.
Data Harmonization

The wave of digitization in the past couple of years has forced every company to focus on data collection. The biggest pain point for manufacturing companies today is figuring out which data is most fruitful. Big data is also being produced by machinery itself, as thousands of sensors in your plant collect data every second, sometimes every millisecond. The real power lies in "the fruitful data", not in "big data".

Intelligent transformations such as FFT, wavelet transforms, approximate entropy, etc. can be applied to high-frequency data. For example, suppose you are capturing vibration data from a CNC machine spindle at 2 kHz, which translates to a couple of GBs within a day. Applying "slot aggregation" becomes easy once you see that ~99% of the time your spindle behaves normally, and this "normal" data can be safely aggregated into a higher bucket (say one data point per minute) with negligible information loss. The remaining ~1% of the time, the spindle is capturing anomalies (during worn-out or tool-breaking conditions); these should not be aggregated at all, as that is "the fruitful data", and dropping it would mean information loss. This allows compression from a couple of GBs to a couple of MBs without compromising accuracy.
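The slot-aggregation idea above can be sketched as follows: downsample statistically "normal" readings into one-minute buckets while keeping anomalous raw samples at full resolution (a simplified illustration with synthetic data; the z-score rule and 10 Hz rate are assumptions for the demo, not the production pipeline):

```python
import numpy as np
import pandas as pd

def slot_aggregate(series, freq="1min", z=4.0):
    """Downsample 'normal' readings to one mean per time slot, but keep
    anomalous raw samples (|value - mean| > z * std) at full resolution."""
    mu, sigma = series.mean(), series.std()
    anomalous = (series - mu).abs() > z * sigma
    normal_agg = series[~anomalous].resample(freq).mean().dropna()
    return pd.concat([normal_agg, series[anomalous]]).sort_index()

# One hour of 10 Hz vibration-like data (2 kHz in the text; reduced for the demo).
idx = pd.date_range("2024-01-01", periods=36000, freq="100ms")
sig = pd.Series(np.random.default_rng(1).normal(0.0, 0.1, len(idx)), index=idx)
sig.iloc[5000:5010] += 5.0  # injected spike: a tool-break style anomaly
out = slot_aggregate(sig)
```

Here 36,000 raw samples collapse to roughly 60 one-minute means plus the ~10 anomalous raw points, while the spike itself survives untouched.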

AI-Powered Data Recommendation System

Tvarit experts have prior experience in process engineering plants, where calculating the precise set points of various parameters is very important to avoid future anomalies. Tvarit data scientists have built an ML/DL-assisted recommendation engine to achieve this. Further, the confidence level of each AI-predicted/prescribed setpoint is given when recommending these action items to users (shop-floor engineers). The limits of the tweakable input parameters are taken into consideration while creating the recommendation engine. Hence domain knowledge is incorporated into the ML/DL model, and users are provided with sensible action items.
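A minimal sketch of the two ideas in this paragraph, respecting parameter limits and attaching a confidence score, using a toy bootstrap ensemble of linear models (the models, the clipping range, and the spread-based confidence formula are all illustrative assumptions, not Tvarit's engine):

```python
import numpy as np

def recommend_setpoint(models, x, lo, hi):
    """Recommend a setpoint for input x: ensemble mean clipped to the
    permissible range [lo, hi], with a confidence score derived from
    the ensemble's spread (hypothetical scheme)."""
    preds = np.array([m(x) for m in models])
    setpoint = float(np.clip(preds.mean(), lo, hi))
    confidence = float(1.0 / (1.0 + preds.std()))  # more spread -> lower confidence
    return setpoint, confidence

# Toy ensemble: linear models fitted on bootstrap resamples of synthetic process data.
rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(500, 2))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 500)
models = []
for _ in range(20):
    i = rng.integers(0, len(X), len(X))
    A = np.c_[X[i], np.ones(len(i))]
    w, *_ = np.linalg.lstsq(A, y[i], rcond=None)
    models.append(lambda x, w=w: np.r_[x, 1.0] @ w)

# The raw prediction (~9.0) exceeds the permissible limit, so it is clipped.
sp, conf = recommend_setpoint(models, np.array([4.0, 2.0]), lo=0.0, hi=8.0)
```

Clipping to the permissible range is the simplest way to encode equipment limits; the confidence score lets shop-floor engineers judge how much to trust each recommended action.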

The Change We Brought

  • Most advanced ready-to-use AI modules for manufacturing data analytics
  • Accuracy of APA models
  • Time of transfer learning from 1 to n machines
  • Minutes to build your AI model
Our Proven Results

  • Increase in OEE
  • Decrease in delivery time
  • Decrease in energy costs
  • Reduction in quality defects