Smart-Gaze: AI-Powered Coding for Effective Eye Tracking


“What you see is what you purchase”

It is thought that 95% of purchase decisions occur in the subconscious (let’s call it the reptile brain). The choices the reptile brain makes are strongly influenced by what we see. Walking down a supermarket aisle, we pass hundreds of packs of various products, but some of them capture our attention. When we see such an item, the reptile brain kicks into action: it fixates on the packaging that looks fascinating (“ooh, glossy!”) and, before we know it, we are seriously considering buying that product.

Eye-tracking is a well-understood tool for implicitly measuring how people react to different product packaging, ad copy, web-page designs, banner placements and more. Eye-tracking studies have the potential to generate significant RoI on packaging, advertising creatives and placement decisions. We (and the clients we speak to) believe the market is only beginning to scratch the surface when it comes to using this method to produce insights. The reasons commonly cited are that it is expensive, slow and not objective (participants are influenced by the process). We believe the crucial reason is that the technology is not mature enough.

In this whitepaper, we demonstrate how embedding Artificial Intelligence (AI) into current eye-tracking technology can make eye-tracking analysis far more effective, faster and more cost-efficient. Note that we assume the reader is familiar with the eye-tracking process for physical environments.

Eye-Tracking at a Glance

There are two primary eye-tracking methodologies: physical and virtual. In physical eye-tracking, study participants wear eye-tracking hardware (glasses) and walk around a physical space (typically a retail shelf) where real copies of the study object are placed. In virtual eye-tracking, participants view the study object (typically a virtual shelf, website layout or video advertisement) on a computer screen, and a webcam is used to track their gaze movements.

Source: EyeGaze, Threesixtyone

While virtual eye-tracking is faster and cheaper, it has the drawbacks of being less precise (a webcam is not as accurate as dedicated eye-tracking hardware) and further from reality (a virtual shelf looks very different from a real one). In this whitepaper, we focus specifically on physical eye-tracking for the purpose of testing the visual appeal of product packaging on retail shelves.

Coding in Physical Eye Tracking

The key constraint of eye-tracking technology is that it can tell you ‘where’ the customer is looking, but not ‘what’ the customer is looking at. The hardware determines the direction and location at which a person’s gaze is fixated, but it has no knowledge of what the person is actually seeing. It is blind to whether the person is looking at a price tag, a Red Bull can, a Gatorade can, her cellphone, etc. Bridging that gap is the job of ‘coding’: assigning each gaze point to the object it lands on, as the sketch below illustrates.
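To make the idea of coding concrete, here is a minimal Python sketch of the mapping a coder performs: given a gaze point and the labelled Areas of Interest (AOIs) in a video frame, report which AOI the gaze falls on. It assumes, for illustration only, that the AOIs in the frame have already been annotated with bounding boxes; all labels and coordinates are hypothetical.

    # Minimal sketch: "coding" = mapping a raw gaze point to a named AOI.
    # All labels and coordinates below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class AOI:
        label: str   # e.g. "Red Bull can", "price tag"
        x1: float    # bounding box in frame pixels: top-left (x1, y1),
        y1: float    # bottom-right (x2, y2)
        x2: float
        y2: float

    def code_gaze_point(x: float, y: float, aois: list[AOI]) -> str:
        """Return the label of the AOI containing the gaze point, if any."""
        for aoi in aois:
            if aoi.x1 <= x <= aoi.x2 and aoi.y1 <= y <= aoi.y2:
                return aoi.label
        return "none"   # gaze fell outside every AOI

    # One annotated frame with two AOIs
    frame_aois = [
        AOI("Red Bull can", 100, 200, 180, 350),
        AOI("price tag", 100, 360, 180, 400),
    ]
    print(code_gaze_point(140, 300, frame_aois))   # -> Red Bull can

A human coder does exactly this, frame by frame, by eye; the question is whether software can do it instead.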

Manual Coding Has Many Obstacles

The current coding options on the market for physical eye-tracking do not provide fully automated coding of gaze videos. They are essentially annotation/tagging software designed to make manual coding more efficient. Manual coding creates the following obstacles in a typical physical eye-tracking project:

  • It is labour-intensive and does not scale across a large number of videos
  • It is slow, which drives up the turnaround time of the entire research exercise (a big roadblock in today’s fast-paced business environment)
  • It can introduce human errors that reduce the accuracy of the final analysis
  • The logistics of managing a manual coding pipeline are challenging
  • Researchers sometimes influence respondents’ behaviour to make coding easier, which takes the exercise away from objectivity

Smart-Gaze: An AI Solution for Eye-Tracking Coding

Smart-Gaze uses a deep-neural-network-based architecture to analyse raw gaze videos. Through a short training phase, the algorithm learns what the key Areas of Interest (AOIs) look like; once trained, it performs the coding automatically, with accuracy comparable to what a human coder would achieve. The sketch below outlines how such a pipeline could fit together.
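As a rough illustration only (the actual Smart-Gaze architecture is not public), here is a hedged Python sketch of an automated coding pipeline: a trained detector localises the AOIs in each frame, and each gaze sample is then assigned to the AOI that contains it. The function detect_aois is a stub standing in for a real neural-network detector; every name and value in it is an assumption made for the example.

    # Hedged sketch of an automated coding pipeline. `detect_aois` stands in
    # for a trained deep-learning object detector; its output here is stubbed
    # with hypothetical values so the example runs end to end.
    from typing import NamedTuple

    class Detection(NamedTuple):
        label: str                              # AOI name learned in training
        box: tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

    def detect_aois(frame_index: int) -> list[Detection]:
        # Placeholder for a per-frame neural-network detection call.
        return [
            Detection("Gatorade can", (50, 120, 110, 260)),
            Detection("Red Bull can", (130, 120, 190, 260)),
        ]

    def code_video(gaze_samples: list[tuple[int, float, float]]) -> dict[str, int]:
        """Count gaze samples per AOI; each sample is (frame_index, x, y)."""
        counts: dict[str, int] = {}
        for frame, x, y in gaze_samples:
            for det in detect_aois(frame):
                x1, y1, x2, y2 = det.box
                if x1 <= x <= x2 and y1 <= y <= y2:
                    counts[det.label] = counts.get(det.label, 0) + 1
                    break
        return counts

    samples = [(0, 70, 200), (1, 150, 180), (2, 300, 40)]   # hypothetical gaze log
    print(code_video(samples))   # -> {'Gatorade can': 1, 'Red Bull can': 1}

From such per-AOI counts, dwell time per AOI follows directly by multiplying by the gaze tracker’s sampling interval.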

Smart-Gaze makes coding for physical eye-tracking projects far more efficient, with the following benefits:

  • It is much faster than human coding, leading to quicker insights and higher RoI.
  • It is far more cost-effective than human coding.
  • It is highly scalable due to automation.
  • It removes the possibility of human errors caused by fatigue or boredom.
  • There is no need to worry about coding logistics: send us the raw data and get the coded data in return.
  • There is no need to influence participant behaviour, as coding is no longer a concern.

How Does It Work?

At the moment, Smart-Gaze is not a self-serve product that a researcher can log into and use. It is a service: the researcher captures raw gaze videos with eye-tracking glasses (any brand works, no constraints here), briefs us on the key Areas of Interest, and we take over. Within 3 days, our team at Karna AI trains an AI model and codes the data according to the client’s needs.

The accuracy can be reviewed through a coding visualisation tool. Other key benefits of this model include:

  • There is no constraint on eye-tracking hardware brand (our system works with any type of glasses)
  • There is no need for an up-front investment in an eye-tracking software license
  • This is a pay-as-you-go model where the client is charged on a per-video basis
  • The process and the nature of the output are fully customizable.

KARNA AI
