Could Wearable Technology Make Football Safer?

The NFL’s 2017-18 season came with its fair share of injuries, ranging from the more common, like torn ACLs, rotator cuff issues, and complications from reparative surgeries, to the far more serious. One incident stands out from the rest: the head and spinal injury suffered by Steelers linebacker Ryan Shazier during a game against the team’s long-time rivals, the Cincinnati Bengals. The moment Shazier collapsed to the ground clutching his back is one that will haunt all who witnessed it, both in the stands and at home.

Thankfully, the Steelers linebacker is well on the road to recovery, as he is finally home from the hospital and able to stand without assistance. Furthermore, he is determined to support his team in the upcoming season, one way or another.

If there is any benefit to such incidents, it is that they inspire researchers in the medical and tech fields to seek new methods of tracking and protecting athletes from debilitating, career-ending, and even life-threatening injuries.

So far, wearable technology seems the most fitting way to achieve this goal. At one time, such devices were simply used to track athletes’ steps, sleep patterns, and overall performance, helping coaches determine the best ways to improve practice sessions and strategies. Now, however, these sensors can measure athletes’ stamina, as well as the force of the many collisions they endure during any given game.

To collect this pertinent data discreetly, researchers tested sensor-filled mouth guards equipped with gyroscopes and accelerometers. The gyroscopes tracked where a given player’s head was in space, offering insight into how players held their heads and how those positions correlated with their stamina (e.g., players who held their heads down, thus exposing the crowns of their heads and their spines to injury, were often tired). The accelerometers, on the other hand, detected a player’s speed and movement around the field.
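To make the idea concrete, here is a minimal sketch of how the gyroscope side of such monitoring might work. Everything in it is an illustrative assumption rather than the researchers’ actual design: the pitch threshold, the sample rate, and the data structure are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical parameters: a sustained downward pitch beyond this angle
# suggests the player is leading with the crown of the head.
HEAD_DOWN_PITCH_DEG = -30.0
SUSTAINED_SAMPLES = 25  # roughly half a second at an assumed 50 Hz sample rate

@dataclass
class OrientationSample:
    pitch_deg: float  # head pitch estimated from the gyroscope; negative = looking down

def head_down_sustained(samples: list[OrientationSample]) -> bool:
    """Return True if the most recent samples all show a head-down posture."""
    recent = samples[-SUSTAINED_SAMPLES:]
    if len(recent) < SUSTAINED_SAMPLES:
        return False
    return all(s.pitch_deg <= HEAD_DOWN_PITCH_DEG for s in recent)
```

A trainer could correlate flags from a function like this with fatigue data to spot tired players before they take a dangerous hit.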

These tools were effective in showing researchers not only where players were hit, but how hard they were hit as well. If a given player took a particularly intense blow, the sensors would notify the coaches via Bluetooth, signaling that the player ought to be taken off the field and examined.
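A similarly stripped-down sketch of the alert logic might look like the following. The 80 g cutoff is an assumption borrowed from commonly cited head-impact screening thresholds, not a figure from the study, and the Bluetooth delivery is reduced to a placeholder print statement.

```python
import math

IMPACT_THRESHOLD_G = 80.0  # assumed screening threshold; the real cutoff is not given

def impact_magnitude_g(ax: float, ay: float, az: float) -> float:
    """Magnitude of one linear-acceleration sample, in g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def check_impact(ax: float, ay: float, az: float) -> None:
    """Flag a hit hard enough to warrant pulling the player for examination."""
    g = impact_magnitude_g(ax, ay, az)
    if g >= IMPACT_THRESHOLD_G:
        # In the real system, this notification would go out over Bluetooth.
        print(f"Alert: {g:.0f} g head impact; evaluate player.")

check_impact(60.0, 55.0, 45.0)  # a roughly 93 g hit, so the alert fires
```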

Although it is difficult to say just when such devices will be deployed on the field, it is encouraging to see that serious injuries are finally being treated with the gravity they deserve, and that players’ health and wellbeing are finally being prioritized by the medical community. Hopefully, these developments will soon encourage coaches and strategists to follow suit, rather than pushing injured players back onto the field for the sake of winning a game.

How AI Bots Are Becoming More Human

In the middle of Google headquarters, something unbelievable, almost like science fiction, is occurring: robots are studying human beings. They are memorizing the ways in which humans interact, perform mundane tasks, eat, drink, sleep, and entertain themselves with household gadgets and other tech objects: everything we do on a daily basis.

However, this is not part of an elaborate scheme to take jobs away from human employees, nor a plot to integrate artificial intelligence bots into our society. On the contrary, Google researchers are merely training these bots to better identify and respond to human actions and interactions.

In the past, Google researchers have made extraordinary breakthroughs in the development and improvement of artificial intelligence. However, every version of every bot has struggled with the same tasks: identifying and classifying objects and humans, and the ways in which humans interact with members of those categories.

With that fact in mind, Google has implemented an intriguing method of educating their bots to more effectively and efficiently identify human interaction — a tactic that entails hours of binge-watching movies.

Over the past several years, researchers have curated a catalog of over 57,000 movie clips from around the globe, featuring over 96,000 human beings in total. Each day, the artificial intelligence bots work through the catalog, practicing identifying and categorizing different actions and interactions.
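For readers curious what that practice loop might look like in code, here is a heavily simplified sketch of supervised action classification in PyTorch. The label set, the toy model, and the random tensors standing in for labeled movie clips are all hypothetical; Google’s actual pipeline is far larger and is not described in the article.

```python
import torch
import torch.nn as nn

# Hypothetical label set; the real catalog annotates many more actions.
ACTIONS = ["walk", "eat", "drink", "talk to", "hand object to"]

class TinyActionClassifier(nn.Module):
    """Toy stand-in for a video model: average per-frame features, then classify."""
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.head = nn.Linear(feature_dim, len(ACTIONS))

    def forward(self, clip_features: torch.Tensor) -> torch.Tensor:
        # clip_features: (batch, frames, feature_dim) precomputed frame embeddings
        return self.head(clip_features.mean(dim=1))

model = TinyActionClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Random tensors standing in for a batch of labeled clips: 8 clips, 16 frames each.
clips = torch.randn(8, 16, 512)
labels = torch.randint(0, len(ACTIONS), (8,))

logits = model(clips)           # one score per action, per clip
loss = loss_fn(logits, labels)  # how wrong the guesses were
loss.backward()                 # learn from the mistakes
optimizer.step()
```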

Thanks to this method, the bots are getting a clearer picture of human interaction than ever before. Human actions and interactions are less clear and harder to identify in real time than they are in film, where scenes are staged and deliberately framed, which is why the bots have struggled so greatly to categorize them while observing live subjects.

Now, if this experiment gives Google’s researchers the results they desire, it may not only improve the bots’ ability to recognize human actions and interactions, but eventually allow them to infer the purpose of those actions, as well as the goals humans are trying to reach through them. This could, in turn, aid Google’s bots in targeting potential consumers based on their actions online and in person.

It is important to note that such achievements are likely a long way off, as artificial intelligence is still very much in its infancy. Still, it will be intriguing to see what other tasks these bots may be able to perform once they are deeply familiar with not only our actions, but our intentions as well.