{"project":"55848969d57d770d00343a62","_id":"5784e66c5ae9c20e00bc2547","user":{"_id":"560527ecf6b86e0d00284ac0","username":"","name":"Kseniya Savitsina"},"initVersion":{"_id":"5606c4ea3a93940d002b3eaa","version":"2.0"},"__v":0,"createdAt":"2016-07-12T12:45:32.519Z","changelog":[],"body":"The DeviceHive project development team has explored the possibility of integrating the DeviceHive data platform with SAP Hana DB. DeviceHive allows third party IoT device developers to concentrate on business tasks and the distinctive features of a project by shifting the data and device management onto the platform. SAP Hana DB provides analytical tools integrated with data storage. \n\nThis approach can be used as a reference architecture solution based on devices that use Ubuntu Core. In our example, the devices collect data from the sensors and transmit it to the cloud for further analysis.\n\nThe DeviceHive platform allows us to collect data sent by devices in various ways. One of the most convenient ways that is available immediately after server installation is to use the data stream in an Apache Kafka server. The data flow available from Kafka topics contains device notifications. 
Aggregation and data analysis received from the sensors on the fly makes it possible to create a real-time monitoring system.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/bJNdhmwTaRYnzndxlsQX_01.png\",\n        \"01.png\",\n        \"975\",\n        \"761\",\n        \"#74a669\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nWe discovered several possible ways to integrate DeviceHive with SAP:\n1.\tUse SAP HANA Smart Data Streaming and implement a custom data adaptor reading from Kafka;\n2.\tUse SPARK as a bridge between Kafka and SAP Hana;\n3.\tWrite a custom application for the SAP Cloud platform, which reads directly from the remote Kafka queue and stores data in the local SAP Hana DB.\n\n**SAP SDS**\n\nIn the case of SAP Hana, the solution can use one of several available data-ingestion mechanisms. The SDS package is native to the SAP platform and can be used to develop its own Data Adapter. This solution makes it possible to create a comprehensive real-time analytics system using the rich set of features provided by the Smart Data Streaming solution. \n\nThis approach seemed to be the best since it requires only one integration point (class) in this case. The adaptor implemented will be configurable and reusable and the processing of thresholds will be performed inside Smart Data Streaming.\n\nHowever, this approach has its negative aspects as well: a license is required to use Smart Data Streaming. If the user already have a license, then this approach is definitely the best one.\n\n**Spark Streaming**\n\nAn alternative solution would be to use Spark Streaming, which in turn is similar to SDS in terms of functionality. Spark Streaming makes it possible to use general-purpose programming languages, as well as built-in libraries. Since built-in libraries can be uses, a Spark-based solution allows complex analytical computations and integration of the data flow with other systems. 
\n\nArchitecturally, HANA’s Smart Data Access Component connects to Spark SQL through the ODBC driver and issues SQL queries for retrieving data. The resulting data set is treated as a remote table in HANA, and it can be used as the input for all advanced analytics functionalities in HANA, which includes modeling views, SQL Script procedures, predictive text analytics, and so on.\n\n**Custom Application**\n\nThis is the simplest approach that does not require any additional software access/changes in the SAP cloud platform. In this case, a Kafka connector will be a part of a web application that visualizes device notifications and alert data.\n\nIn our work we use the simplest approach as an example of integration, leaving further research and development a possibility by highlighting alternative connectivity options.\n\n**Implementation**\n\nFor our R&D we chose the custom application approach since it's based on open source solutions and gives more integration freedom. We didn't have any specific requirements limiting us, so in this case this option appeared to be more universal and faster. However, we encourage our users to consider the other options as well, according to the project requirements.\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/ov9CvkfARAsT5Ku7LzKF_02.png\",\n        \"02.png\",\n        \"975\",\n        \"430\",\n        \"#dbdbdb\",\n        \"\"\n      ]\n    }\n  ]\n}\n[/block]\nThe above diagram shows DeviceHive sending sensor diagnostic information to the Kafka Notifications queue. SPARK processes the Notifications queue data and puts the results back into the Alerts queue. Two Kafka connectors on the SAP cloud platform listen to both Notification & Alert queues and put the data into Hana DB tables. 
The Web UI retrieves data from those tables through Ajax calls to *UiServlet* and displays charts using the D3 JS library.\n\nHibernate was used as the DB interaction layer with two Entity classes:\n\nThe notification class for device diagnostic information mapping, *\"LatestNotifications\"* is used to display filtered information for the device on the UI with the given GUID within the requested time period.\n\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"@Entity\\n@Table(name = \\\"Notification\\\")\\n@NamedQueries({\\n      @NamedQuery(name = \\\"AllNotifications\\\", query = \\\"select n from Notification n\\\"),\\n      @NamedQuery(name = \\\"LatestNotifications\\\",\\n              query = \\\"select n from Notification n \\\" +\\n                      \\\" where n.timestamp > :fromTime \\\" +\\n                      \\\"   and n.deviceGuid = :deviceId \\\" +\\n                      \\\" order by n.timestamp\\\")\\n})\\npublic class Notification {\\n  @Id\\n  @GeneratedValue\\n  private Long id;\\n\\n  @Basic\\n  private String deviceGuid;\\n  @Basic\\n  private String notification;\\n  @Basic\\n  private Timestamp timestamp;\\n  @Basic\\n  private String mac;\\n  @Basic\\n  private String uuid;\\n  @Basic\\n  private Double value;\\n  ...\\n}\\n\",\n      \"language\": \"java\"\n    }\n  ]\n}\n[/block]\nThe alert class mapping of alerts / thresholds *\"LatestAlerts\"* is used to filter out Alerts for a device with the given GUID within the requested time period.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"@Entity\\n@Table(name = \\\"Alert\\\")\\n@NamedQueries({\\n      @NamedQuery(name = \\\"AllAlerts\\\", query = \\\"select a from Alert a\\\"),\\n      @NamedQuery(name = \\\"LatestAlerts\\\",\\n              query = \\\"select a from Alert a \\\" +\\n                      \\\" where a.timestamp > :fromTime \\\" +\\n                      \\\"   and a.deviceGuid = :deviceId \\\" +\\n                      \\\" order by 
a.timestamp\\\")\\n})\\npublic class Alert {\\n  @Id\\n  @GeneratedValue\\n  private Long id;\\n\\n  @Basic\\n  private String deviceGuid;\\n\\n  @Basic\\n  private Timestamp timestamp;\\n  ...\\n}\\n\\n\",\n      \"language\": \"java\"\n    }\n  ]\n}\n[/block]\nDon’t forget to add a binding between the DB schema and Java Application to a connection to Hana DB.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"public class LocalEntityManagerFactory {\\n  private static EntityManagerFactory emf;\\n\\n  public static void init() {\\n      if (emf == null) {\\n          try {\\n              InitialContext ctx = new InitialContext();\\n              DataSource ds = (DataSource) ctx.lookup(\\\"java:comp/env/jdbc/DefaultDB\\\");\\n\\n              Map properties = new HashMap();\\n              properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);\\n              emf = Persistence.createEntityManagerFactory(\\\"kafka-consumer\\\", properties);\\n              System.out.println(\\\"EMF created\\\");\\n          } catch (NamingException e) {\\n              e.printStackTrace();\\n          }\\n      }\\n  }\\n\\n\",\n      \"language\": \"java\"\n    }\n  ]\n}\n[/block]\nMessage consumption from Kafka:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"public class ConsumerGroup {\\n\\n  ...\\n\\n  public ConsumerGroup(String zookeeper, String groupId, String notificationTopic,\\n                       String alertTopic) {\\n      notificationConsumer = Consumer.createJavaConsumerConnector(\\n              createConsumerConfig(zookeeper, groupId));\\n      alertConsumer = Consumer.createJavaConsumerConnector(\\n              createConsumerConfig(zookeeper, groupId));\\n      this.notificationTopic = notificationTopic;\\n      this.alertTopic = alertTopic;\\n  }\\n\\n  ...\\n\\n  private static ConsumerConfig createConsumerConfig(String zookeeper, String groupId) {\\n      Properties props = new Properties();\\n      
props.put(\\\"zookeeper.connect\\\", zookeeper);\\n      props.put(\\\"group.id\\\", groupId);\\n      props.put(\\\"key.deserializer\\\", StringDeserializer.class.getName());\\n      props.put(\\\"value.deserializer\\\", StringDeserializer.class.getName());\\n      return new ConsumerConfig(props);\\n  }\\n}\\n\\n\",\n      \"language\": \"java\"\n    }\n  ]\n}\n[/block]","slug":"devicehive-predictive-maintenance-application-with-sap","title":"DeviceHive Predictive Maintenance Application with SAP"}

# DeviceHive Predictive Maintenance Application with SAP


The DeviceHive project development team has explored the possibility of integrating the DeviceHive data platform with SAP HANA DB. DeviceHive allows third-party IoT device developers to concentrate on business tasks and the distinctive features of a project by shifting data and device management onto the platform. SAP HANA DB provides analytical tools integrated with data storage.

This approach can be used as a reference architecture for solutions based on devices that run Ubuntu Core. In our example, the devices collect data from sensors and transmit it to the cloud for further analysis.

The DeviceHive platform allows us to collect data sent by devices in various ways. One of the most convenient ways, available immediately after server installation, is to use the data stream in an Apache Kafka server. The data flow available from Kafka topics contains device notifications. Aggregating and analyzing the sensor data on the fly makes it possible to build a real-time monitoring system.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/bJNdhmwTaRYnzndxlsQX_01.png",
        "01.png",
        "975",
        "761",
        "#74a669",
        ""
      ]
    }
  ]
}
[/block]
We identified several possible ways to integrate DeviceHive with SAP:

1. Use SAP HANA Smart Data Streaming and implement a custom data adapter reading from Kafka;
2. Use Spark as a bridge between Kafka and SAP HANA;
3. Write a custom application for the SAP Cloud Platform that reads directly from the remote Kafka queue and stores data in the local SAP HANA DB.

**SAP SDS**

In the case of SAP HANA, the solution can use one of several available data-ingestion mechanisms. The SDS package is native to the SAP platform and can be used to develop a custom Data Adapter. This solution makes it possible to create a comprehensive real-time analytics system using the rich set of features provided by Smart Data Streaming.
This approach seemed to be the best, since it requires only one integration point (class). The implemented adapter would be configurable and reusable, and threshold processing would be performed inside Smart Data Streaming.

However, this approach has a drawback as well: a license is required to use Smart Data Streaming. If the user already has a license, then this approach is definitely the best one.

**Spark Streaming**

An alternative solution is Spark Streaming, which is similar to SDS in terms of functionality. Spark Streaming makes it possible to use general-purpose programming languages as well as built-in libraries. Because built-in libraries can be used, a Spark-based solution allows complex analytical computations and integration of the data flow with other systems.

Architecturally, HANA's Smart Data Access component connects to Spark SQL through the ODBC driver and issues SQL queries to retrieve data. The resulting data set is treated as a remote table in HANA and can be used as input for all advanced analytics functionality in HANA, including modeling views, SQLScript procedures, predictive and text analytics, and so on.

**Custom Application**

This is the simplest approach, requiring no additional software access or changes on the SAP Cloud Platform. In this case, a Kafka connector is part of a web application that visualizes device notifications and alert data.

In our work we use this simplest approach as the integration example, leaving room for further research and development by highlighting the alternative connectivity options.

**Implementation**

For our R&D we chose the custom application approach, since it is based on open-source solutions and gives more integration freedom. We didn't have any specific requirements limiting us, so this option appeared to be more universal and faster.
However, we encourage our users to consider the other options as well, according to their project requirements.
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/ov9CvkfARAsT5Ku7LzKF_02.png",
        "02.png",
        "975",
        "430",
        "#dbdbdb",
        ""
      ]
    }
  ]
}
[/block]
The diagram above shows DeviceHive sending sensor diagnostic information to the Kafka Notifications queue. Spark processes the Notifications queue data and puts the results back into the Alerts queue. Two Kafka connectors on the SAP Cloud Platform listen to both the Notifications and Alerts queues and put the data into HANA DB tables. The Web UI retrieves data from those tables through Ajax calls to *UiServlet* and displays charts using the D3.js library.

Hibernate is used as the DB interaction layer, with two entity classes.

The notification class maps device diagnostic information; its *"LatestNotifications"* named query is used to display filtered information on the UI for the device with the given GUID within the requested time period.

```java
@Entity
@Table(name = "Notification")
@NamedQueries({
      @NamedQuery(name = "AllNotifications", query = "select n from Notification n"),
      @NamedQuery(name = "LatestNotifications",
              query = "select n from Notification n " +
                      " where n.timestamp > :fromTime " +
                      "   and n.deviceGuid = :deviceId " +
                      " order by n.timestamp")
})
public class Notification {
  @Id
  @GeneratedValue
  private Long id;

  @Basic
  private String deviceGuid;
  @Basic
  private String notification;
  @Basic
  private Timestamp timestamp;
  @Basic
  private String mac;
  @Basic
  private String uuid;
  @Basic
  private Double value;
  ...
}
```

The alert class maps alerts/thresholds; its *"LatestAlerts"* named query filters out alerts for the device with the given GUID within the requested time period.
[block:code] { "codes": [ { "code": "@Entity\n@Table(name = \"Alert\")\n@NamedQueries({\n @NamedQuery(name = \"AllAlerts\", query = \"select a from Alert a\"),\n @NamedQuery(name = \"LatestAlerts\",\n query = \"select a from Alert a \" +\n \" where a.timestamp > :fromTime \" +\n \" and a.deviceGuid = :deviceId \" +\n \" order by a.timestamp\")\n})\npublic class Alert {\n @Id\n @GeneratedValue\n private Long id;\n\n @Basic\n private String deviceGuid;\n\n @Basic\n private Timestamp timestamp;\n ...\n}\n\n", "language": "java" } ] } [/block] Don’t forget to add a binding between the DB schema and Java Application to a connection to Hana DB. [block:code] { "codes": [ { "code": "public class LocalEntityManagerFactory {\n private static EntityManagerFactory emf;\n\n public static void init() {\n if (emf == null) {\n try {\n InitialContext ctx = new InitialContext();\n DataSource ds = (DataSource) ctx.lookup(\"java:comp/env/jdbc/DefaultDB\");\n\n Map properties = new HashMap();\n properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);\n emf = Persistence.createEntityManagerFactory(\"kafka-consumer\", properties);\n System.out.println(\"EMF created\");\n } catch (NamingException e) {\n e.printStackTrace();\n }\n }\n }\n\n", "language": "java" } ] } [/block] Message consumption from Kafka: [block:code] { "codes": [ { "code": "public class ConsumerGroup {\n\n ...\n\n public ConsumerGroup(String zookeeper, String groupId, String notificationTopic,\n String alertTopic) {\n notificationConsumer = Consumer.createJavaConsumerConnector(\n createConsumerConfig(zookeeper, groupId));\n alertConsumer = Consumer.createJavaConsumerConnector(\n createConsumerConfig(zookeeper, groupId));\n this.notificationTopic = notificationTopic;\n this.alertTopic = alertTopic;\n }\n\n ...\n\n private static ConsumerConfig createConsumerConfig(String zookeeper, String groupId) {\n Properties props = new Properties();\n props.put(\"zookeeper.connect\", zookeeper);\n props.put(\"group.id\", 
groupId);\n props.put(\"key.deserializer\", StringDeserializer.class.getName());\n props.put(\"value.deserializer\", StringDeserializer.class.getName());\n return new ConsumerConfig(props);\n }\n}\n\n", "language": "java" } ] } [/block]
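To give a feel for the processing step between the Notifications and Alerts queues, here is a minimal, Kafka-free sketch in plain Java of one way such logic could look: a sliding-window average over incoming sensor values that signals an alert when the average crosses a threshold. The class name, window size, and threshold are illustrative assumptions, not part of DeviceHive or of the actual Spark job described above.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Illustrative sketch only: aggregates sensor readings with a sliding-window
 * average and flags an alert when the average exceeds a threshold. In the
 * real pipeline, equivalent logic runs inside the Spark job that reads the
 * Notifications queue and publishes to the Alerts queue.
 */
public class SlidingThresholdDetector {
    private final int windowSize;
    private final double threshold;
    private final Deque<Double> window = new ArrayDeque<>();
    private double sum = 0.0;

    public SlidingThresholdDetector(int windowSize, double threshold) {
        this.windowSize = windowSize;
        this.threshold = threshold;
    }

    /**
     * Feed one reading; returns true if the window average now exceeds
     * the threshold, i.e. an alert should be published.
     */
    public boolean offer(double value) {
        window.addLast(value);
        sum += value;
        if (window.size() > windowSize) {
            sum -= window.removeFirst();  // evict the oldest reading
        }
        return sum / window.size() > threshold;
    }
}
```

Feeding readings 40, 50, 90 into a detector with window size 3 and threshold 55 raises an alert only on the last reading, when the average reaches 60; a single spike that leaves the window average below the threshold is smoothed out rather than alerted on.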