Hartford Hackster.io Edison Hackathon
Intel Edison Virtual Reality
This weekend I developed a project (GitHub source here) as part of the Hartford Hackster.io Hackathon on June 25th, 2016. You can view projects created by other participants here. Intel and Seeed provided Intel Edison and Grove Starter kits to all participants. This project demonstrates the use of the Edison as a sensor gateway, connecting to the AWS IoT service for use by a client utilizing Google Cardboard VR glasses.
The Edison takes sensor readings, which are then published to a topic bound to AWS IoT. This service in turn takes every sensor reading received and, through its rule engine, publishes it onto a queue (SQS). For the web app, the ThreeJS library provides the graphics and stereoscopic view needed for the Cardboard glasses. The client uses the AWS SDK for JavaScript in the Browser to poll the queue for sensor readings, which affect how fast the "strobe" spins in the scene. You can view the client in a web browser on your phone, placed inside the Cardboard.
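The Edison-side publish loop can be sketched roughly as below. This is an illustration, not the project's exact code: the `aws-iot-device-sdk` and `mraa` packages, the certificate paths, the topic name, and the analog pin are all assumptions.

```javascript
// Sketch only: package names, endpoint, topic, and pin are assumptions,
// not the exact code from this project.
function startPublishing() {
  const awsIot = require('aws-iot-device-sdk'); // assumed device SDK
  const mraa = require('mraa');                 // Intel's I/O library on Edison

  const device = awsIot.device({
    keyPath: 'private.pem.key',
    certPath: 'certificate.pem.crt',
    caPath: 'root-CA.crt',
    host: 'YOUR_IOT_ENDPOINT.iot.us-east-1.amazonaws.com'
  });

  const rotary = new mraa.Aio(0); // hypothetical Grove analog sensor on A0

  device.on('connect', function () {
    setInterval(function () {
      // Publish one reading every half second to the IoT topic.
      device.publish('sensors/rotary',
        JSON.stringify(buildReading(rotary.read())));
    }, 500);
  });
}

// Pure helper: wrap a raw analog value into the message payload.
function buildReading(raw) {
  return { value: raw, timestamp: Date.now() };
}
```

The payload is kept to a flat JSON object so the rule engine and the browser client can consume it without any translation step.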
This project was an exercise to learn more about ThreeJS, Virtual Reality, and how the real, physical world can be used as inputs to a constructed, virtual world.
Some Findings
- Initially I was using the AWS IoT rule engine to route all messages received to DynamoDB, using the `${timestamp()}` 'wildcard' as the hash key to keep all entries unique. However, DynamoDB does not provide a way to query for the last element added, so I ran into issues when trying to poll the data from the web application (which uses the data to affect the VR world). Unfortunately, DynamoDB is currently the only database the IoT rule engine supports; otherwise I likely would have gone with RDS (Relational Database Service). I also considered S3 (Simple Storage Service), but each message would end up in the S3 bucket as an individual JSON file, making querying and pulling the data difficult. Another alternative would have been setting up DynamoDB 'triggers' with the Lambda service to respond to database changes, but that still felt hacky. Because my data did not need to be persisted, Simple Queue Service (SQS) provided a viable alternative, and that is what I ended up going with.
- SQS is not time-ordered. I'm not sure whether any queueing system guarantees ordering, but I found out that due to the way SQS is distributed across AWS servers, receiving messages in the exact order they were sent is not possible. For my purposes, the sequencing was close enough.
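The rule that forwards readings onto the queue can also be created programmatically. A sketch using the `aws-sdk` Node package follows; the rule name, topic, queue URL, and role ARN are all placeholders rather than this project's actual values.

```javascript
// Pure helper: build the rule engine SQL statement for a given topic.
function ruleSql(topic) {
  return "SELECT * FROM '" + topic + "'";
}

// Sketch: creates an IoT rule routing a topic to SQS. Every name and ARN
// here is a placeholder, not a value from this project.
function createRule() {
  const AWS = require('aws-sdk');
  const iot = new AWS.Iot({ region: 'us-east-1' });
  iot.createTopicRule({
    ruleName: 'sensorToSqs',
    topicRulePayload: {
      sql: ruleSql('sensors/rotary'),
      actions: [{
        sqs: {
          queueUrl: 'https://sqs.us-east-1.amazonaws.com/ACCOUNT_ID/sensor-queue',
          roleArn: 'arn:aws:iam::ACCOUNT_ID:role/iot-sqs-role',
          useBase64: false
        }
      }]
    }
  }, function (err) {
    if (err) console.error(err);
  });
}
```

The same rule can of course be set up through the AWS console, which is what the rule engine UI walks you through.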
- SQS only allows a queue to be purged once every 60 seconds, and because I was reading from the queue every half second, I was not able to immediately delete each message after reading it. If I stick with SQS, an option might be to set the message retention period to match how often I'm reading the queue, although given some latency at various points in my system, it might be better to set the retention period to twice the read interval.
- Because I did not need to do anything server-side with the messages stored in SQS, I chose to poll the queue directly from the client code. You can use the AWS SDK for JavaScript in the Browser for this. If only unauthenticated users access the application, the code to authenticate the application to AWS is as simple as:

```javascript
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'YOUR_ID_HERE',
});
```
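With those credentials in place, the polling itself can be sketched as below. The queue URL and payload shape are assumptions for illustration, not this project's actual values.

```javascript
// Sketch: poll SQS from the browser every half second and hand each
// reading to a callback. Queue URL and payload shape are assumptions.
function pollQueue(onReading) {
  const sqs = new AWS.SQS(); // uses the Cognito credentials configured above
  setInterval(function () {
    sqs.receiveMessage({
      QueueUrl: 'https://sqs.us-east-1.amazonaws.com/ACCOUNT_ID/sensor-queue',
      MaxNumberOfMessages: 1
    }, function (err, data) {
      if (!err && data.Messages && data.Messages.length) {
        onReading(parseReading(data.Messages[0].Body));
      }
    });
  }, 500);
}

// Pure helper: pull the numeric sensor value out of a message body.
function parseReading(body) {
  return JSON.parse(body).value;
}
```

The callback would then map the value onto the strobe's rotation speed inside the ThreeJS render loop.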
- AWS Identity and Access Management can be pretty confusing. To set up the app-level authentication, you have to go to the Cognito service and create a new federated identity pool, then use the pool ID from there. The service is nice enough to give you the code to drop in.
Future State
AWS is supremely powerful, but as I improve my project, I'd like to try using a different MQTT client for the publishing and subscribing functionality and potentially remove AWS from the equation altogether. Because I would be subscribing to the topic from the web app, I would need an MQTT client that can subscribe from a browser. Going with this approach would cut me off from the functionality and services AWS provides, but it may be a cleaner approach for this project's use case.
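One candidate is MQTT.js, which can connect from a browser over WebSockets. A sketch of what the subscriber might look like, assuming a hypothetical broker URL and the same topic name used for publishing:

```javascript
// Sketch of an AWS-free subscriber using MQTT.js over WebSockets.
// The broker URL and topic are assumptions, not values from this project.
function subscribeFromBrowser(onReading) {
  const mqtt = require('mqtt'); // or the browser bundle via a <script> tag
  const client = mqtt.connect('wss://broker.example.com:8083/mqtt');

  client.on('connect', function () {
    client.subscribe('sensors/rotary');
  });

  client.on('message', function (topic, payload) {
    onReading(decodeReading(payload));
  });
}

// Pure helper: decode a message payload (Buffer or string) into a value.
function decodeReading(payload) {
  return JSON.parse(payload.toString()).value;
}
```

This would trade the rule engine and queue for a direct publish/subscribe path between the Edison and the browser.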