Embodied HCI: Testing the Myo Gesture Control Armband

I have quite a neat field of study, I have to say. I get to contemplate matters of the digital while also trying out the latest tech that isn’t even officially out there for consumers yet. Like this cool little thing called Myo, which hopes to be part of “building the future of human-computer interaction”.

Myo is a device that you wear around your forearm and use to control things on a computer. The device recognises arm position and the different gestures you perform with your hand (swipes, finger taps and so on).

Some current applications I’ve seen advertised include using it for presentations (flipping slides and such, yeah I know, rather boring perhaps), some games, and controlling tangible real-life objects such as robots.

Installing the device on a Mac was quite fast, and syncing gestures worked well. In the initial tryout we also downloaded a Myo computer cursor integration. My first impression is “cool, but needs some further thinking perhaps”. The cursor moves nicely while you move your arm, but if you use certain gestures as, say, a right click of a mouse, the gesture also moves the cursor, which makes you miss your target. Perhaps assigning mouse clicks to a different gesture would fix this, but as said, I only tried it quickly today.
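Just to make the problem concrete, here is a minimal sketch of one way such an integration could handle it. This is not the Myo SDK or the official cursor integration; the pose names and the event stream are purely hypothetical. The idea is simply to freeze cursor movement whenever a click pose is held, so the click gesture itself can’t drag the pointer off the target.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed pose names, purely for illustration; the real device reports its own set.
CLICK_POSES = {"fingers_spread", "fist"}

@dataclass
class CursorState:
    x: float = 0.0
    y: float = 0.0

def handle_event(state: CursorState, pose: str, dx: float, dy: float) -> Optional[str]:
    """Apply one armband event: either move the cursor or perform a click.

    pose   -- hand pose reported by the device ("rest" when the hand is relaxed)
    dx, dy -- arm movement since the last event, in screen pixels
    Returns the click action performed, if any.
    """
    if pose in CLICK_POSES:
        # Ignore arm movement while a click pose is held, so the pointer
        # stays on whatever the user was aiming at.
        return "right_click" if pose == "fingers_spread" else "left_click"
    state.x += dx
    state.y += dy
    return None

if __name__ == "__main__":
    cursor = CursorState()
    # Simulated event stream: move towards a target, then perform the click gesture.
    for pose, dx, dy in [("rest", 40, 10), ("rest", 5, 2), ("fingers_spread", 12, -3)]:
        action = handle_event(cursor, pose, dx, dy)
        print(f"pose={pose:14s} cursor=({cursor.x:.0f}, {cursor.y:.0f}) action={action}")
```

Mapping clicks onto poses that involve less arm movement would achieve much the same thing, which is roughly what I suspect the “assign clicks to a different gesture” fix amounts to.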

Connections to my study in HCI

In my own research I’m currently looking into embodied accounts of human-computer interaction (HCI). HCI has long been troubled by a view of the user as a disembodied mind: a description that makes the user sound more like a disembodied information-crunching machine than an actual human being who experiences their surroundings more broadly through different parts of the body and uses them to interact with various objects.

Some studies I’ve recently read, like the one in game studies by Farrow and Iacovides (2012), say that we should try to develop more coherent ways to understand embodiment in digital or virtual environments, and that this could lead to a better understanding of HCI. Still, in my opinion, a mouse and keyboard or a game controller can similarly be considered an embodied experience, as the controls become transparent and (if things go right) allow us to interact with objects on the screen. Take the simple example of a computer cursor: when I want to move it, I do not think about the device (the mouse) that mediates the movement. I am moving the cursor.

There currently seems to be a bit of confusion about new full-body-gesture-whatever controllers and whether they are somehow more immersive. Farrow and Iacovides (2012) actually criticized this as a wrong assertion: experienced gamers feel very strong engagement with the activity on the screen even with a regular game controller. Simply put: as a controller used with the hands, it is still embodied interaction. Using the body, gestures or voice recognition to control games has actually produced mixed feelings, for example with Skyrim and Mass Effect. Interestingly, voice commands might even distance one from the game. Well, think about it: isn’t it a bit uncanny to command yourself by saying “Marko, do this and this”?

Recently Sarah Zhang in Gizmodo Australia criticized VR for not letting us feel the kinds of movements we see on the screen. There are some good points in her thinking, but I think some of her ideas are slightly misleading. If the movements themselves were the issue, I couldn’t play video games on a monitor or a TV screen either. Still, I do not get nausea or “cybersickness” with screens. I am also very much immersed, engaged or involved (whichever of these terms fits for you) in the game. So the question is whether the problem actually lies with the stereoscopic display technology, rather than with the activity and its correspondence to our body.

Anyway, these are interesting times to examine what (embodied) human-computer interaction means nowadays, and how we should talk about it.