A few weeks ago, I had the opportunity to participate in the first AEC Hackathon in Dallas, TX. Built around the idea of bringing the AEC community together to develop new technologies and innovations that solve industry problems, the AEC Hackathon aligns with some of our goals here at LINE. We were interested from the start, but after seeing a post by Beck's Virtual Building Group on some of the hardware they and hackathon co-sponsor Rogers-O'Brien were bringing, we knew we had to show up.
Many of the technologies on hand were things we've at least dabbled in (UAVs, VR, 3D printing, etc.), but one item of particular interest to us was the Emotiv EPOC+ EEG headset. This bit of kit has been on our radar for a while, but we had not gotten our hands on one yet. The version of the Emotiv software we demoed works entirely through its Control Panel and provides access to three suites of data: Expressiv, Affectiv, and Cognitiv. Expressiv picks up on facial movements and reports fairly straightforward things like Blink, Look Down, and Smile, but also less obvious parameters like Lower Face Action (which turns out to cover things like Laugh, Smile, or Smirk). Affectiv never seemed to give me consistent data, but it provides properties such as Engagement/Boredom, Meditation, and Frustration scores. Cognitiv was the suite I was most interested in, as it lets you train a thought to a particular action like Push, Rotate, or Pull, then tells you when you're performing that thought and with what power.
The night the Hackathon kicked off, I tracked down Brian Monwai of Beck to make sure I'd land on whatever team coalesced around the Emotiv. With my place on the team secure (and, at the time, the team limited to just myself), I was quite happy when the first morning of work started with three of us crowded around the EEG reader, ready to make something awesome. By midday the others had splintered off into other groups out of frustration, leaving me on my own, but I was eventually able to get something close to a reliable connection to the device. Once I could read the data and felt things were beginning to fall into place, I set about trying to connect directly to a custom Grasshopper node, and spent the rest of the day failing at it. A mixture of impatience, not reading the documentation, mixing 32-bit and 64-bit libraries, and false assumptions left me at the end of day one with nothing to show for all the work I'd put in.
Even with the setbacks, I was single-minded about getting the Emotiv to work as I envisioned it, so at the start of the second day I went back to my fallback plan: writing data from the headset to a text file in XML format, then reading and parsing the XML in Grasshopper. This ended up working reasonably well, and by mid-morning I had a reliable connection; all I still had to worry about were the errors thrown when the file writer and the file reader tried to access the file at the same time. With that success, and in the spirit of being cross-platform, I decided to work on a Dynamo plugin using its Zero Touch interface. Technically the Dynamo plugin worked, though having made few previous attempts at writing a Dynamo node (and none of them very successful), I was left with a working but awkward node. But hey, this was a hackathon.
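For anyone curious what that fallback looks like, here is a minimal sketch of the file-handoff pattern in Python (the real plugins are C#/.NET; the file name, channel names, and XML layout here are my own illustrative assumptions, not the actual format). The retry loop in the reader is the simple way to survive the writer and reader hitting the file at the same moment:

```python
import time
import xml.etree.ElementTree as ET

def write_snapshot(path, values):
    """Write the latest headset values as a small XML document."""
    root = ET.Element("emotiv")
    for name, value in values.items():
        ET.SubElement(root, "channel", name=name).text = str(value)
    ET.ElementTree(root).write(path)

def read_snapshot(path, retries=5, delay=0.05):
    """Parse the XML file, retrying briefly if the writer has it locked or mid-write."""
    for _ in range(retries):
        try:
            root = ET.parse(path).getroot()
            return {c.get("name"): float(c.text) for c in root.iter("channel")}
        except (OSError, ET.ParseError):
            # File locked or half-written; back off and try again.
            time.sleep(delay)
    return {}

# Hypothetical channel values, as the Expressiv suite might report them.
write_snapshot("emotiv.xml", {"Blink": 0.0, "Smile": 0.73})
print(read_snapshot("emotiv.xml"))
```

A file-based handoff like this is crude but easy to debug: you can open the XML in any editor and see exactly what the last sample looked like.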
By noon the sensors were beginning to dry out, and I had to make sure everything was working for the presentation I needed to give. It would occasionally work but was hardly reliable, so when presentation time came, knowing I was the second team up, I stood back in a corner with the software running, hurriedly reapplied saline solution to the sensors, and tried to keep everything alive through willpower alone. I was able to present what we had at that point: only the Expressiv suite implemented, plugged into a twitchy cube in Grasshopper. I focused on the Expressiv suite because it was easy to tell whether it was working based on how closely the animated robot mimicked my actions, and because I didn't have time to train myself on the Cognitiv actions after finally getting it to work semi-reliably.
With the Hackathon complete, I uploaded the code to GitHub and began making plans to convince whoever I could find with the company credit card to purchase one. As luck would have it, though, Brian let me borrow their Emotiv headset for a few weeks to continue fine-tuning what we started, so I didn't have to spend Monday morning begging for technology and inventing good reasons why an architecture firm needs an EEG reader. With that kind offer, I was able to spend some time over the last couple of weeks refining (read: completely rewriting from scratch) my code. Because of the issues I'd run into with two applications trying to access the same file, and because I wanted to record and potentially play back a data stream from the headset, I opted to write the data to a SQL database instead. Since I didn't already have something like MS SQL Server or MySQL installed, I went lightweight with SQLite, which stores the database in a single file that can be easily transported. I was hoping to use Nathan Miller's Slingshot! plugins for Grasshopper and Dynamo to make things easier on myself, but while I could find screenshots of Slingshot! with SQLite capabilities, I failed to find a version that actually had them (I'm probably just being impatient again). So I went ahead with my original plan and wrote custom nodes for Dynamo and Grasshopper that read from these very specific databases (read: not nearly as flexible as Nathan's SQL plugins). If you look at the GitHub page for the project, you'll see three projects built into it: LINE.Emotiv.Connect, which connects to the headset/Control Panel and writes to the SQLite database file; LINE.Emotiv.Dynamo, which reads the database and outputs the most recent values to Dynamo; and LINE.Emotiv.GH, which does the same for Grasshopper.
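The shape of that writer/reader split can be sketched in a few lines of Python using the built-in sqlite3 module (again, the actual projects are C#; this table schema, the channel names, and the function names are illustrative assumptions, not the schema LINE.Emotiv.Connect uses). Appending every timestamped sample is what makes later playback possible, while the Dynamo/Grasshopper side only ever asks for the most recent row:

```python
import sqlite3
import time

def open_db(path):
    """Open (or create) the sample log. ':memory:' works for quick tests."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS samples (
                      stamp   REAL,
                      channel TEXT,
                      value   REAL)""")
    return db

def record(db, channel, value):
    """Append one reading; rows are never overwritten, so the stream can be replayed."""
    db.execute("INSERT INTO samples VALUES (?, ?, ?)",
               (time.time(), channel, value))
    db.commit()

def latest(db, channel):
    """Fetch the newest value for a channel -- what a live node in Dynamo/GH wants."""
    row = db.execute(
        "SELECT value FROM samples"
        " WHERE channel = ? ORDER BY stamp DESC, rowid DESC LIMIT 1",
        (channel,)).fetchone()
    return row[0] if row else None

db = open_db(":memory:")
record(db, "Smile", 0.4)
record(db, "Smile", 0.9)
print(latest(db, "Smile"))  # most recent reading: 0.9
```

Unlike the shared XML file, SQLite arbitrates concurrent access itself, which sidesteps the writer-versus-reader collisions from day one of the hackathon.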
Now all that’s left is to train my mind, and become one with the algorithm.