
This article will focus on Result Classification to determine the intent of a user. When an action is taken, there is an ocean of potential matches. Reducing that pool will speed up matching, reduce computational load, and improve accuracy.

Large data sets are essential to modern business and play many roles. On the business side, they hold a wealth of data from daily operations, related ancillary data, and potentially vast amounts of information about the individuals within the business and its customers.

On the other side, they form the basis of AI insight: deep learning algorithms are pre-trained on them to identify patterns. You don't necessarily purchase that vast training library, but you do purchase a distillation of it: a model that your analysis tools apply to your own data to identify patterns.

Recently I have focused on articles around Fitness, and this article continues that thread. Future articles will cover data analysis that can be applied more broadly.

I will choose one example and we will look at it in detail. In Tier 3 of the Connected Gym series, I talked about recognizing workout moves automatically from the accelerometer and gyroscope data of a wearable device. A user does a particular workout move, and the watch recognizes that move and catalogs it. We will use a Bicep Curl as an example.

The Infinite Possibilities

Imagining all the moves your body (not just your arm) goes through during the day will boggle the mind. Everything from walking, eating, typing, sleeping, gesturing, and working out amounts to hundreds of moves for a given individual. Compound that with all the things they could do (the match set), and it's in the thousands. How can a watch sort through every move, match it against a vast set of possibilities, and not bog down in the matching process?

Contextual Data Matching
Set of Possible Matches: Narrowing Down the Search

Given this large set of data to match against, we want to identify that the user is now doing a Bicep Curl. Before we get into details, note that you will never exactly replicate the same move twice. Each curl you do will be slightly different from the previous one. No two people will do them the same. So you have a vast number of different, but similar, motions to detect.

Because there are so many similar motions, the motion you are capturing needs to be reduced to a representation that "fuzzes out" the slight variations and makes matching a bit easier.
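
To make that concrete, here is a minimal Python sketch of one possible representation: it resamples a window of accelerometer data to a fixed length and normalizes it so small timing and amplitude differences between repetitions wash out. The function name and the 32-point template length are my own illustrative choices, not part of any specific wearable SDK.

```python
import numpy as np

def to_representation(samples, target_len=32):
    """Reduce a raw accelerometer window (N x 3 array of x/y/z readings)
    into a fixed-length, normalized template. Resampling handles speed
    differences between reps; per-axis normalization handles amplitude
    differences, so similar moves end up looking similar.
    """
    samples = np.asarray(samples, dtype=float)
    n, axes = samples.shape

    # Time-normalize: resample each axis onto a fixed number of points.
    t_old = np.linspace(0.0, 1.0, n)
    t_new = np.linspace(0.0, 1.0, target_len)
    resampled = np.column_stack(
        [np.interp(t_new, t_old, samples[:, a]) for a in range(axes)]
    )

    # Amplitude-normalize: zero mean, unit variance per axis.
    resampled -= resampled.mean(axis=0)
    std = resampled.std(axis=0)
    std[std == 0] = 1.0
    return resampled / std
```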

With a representation to search with, you next need to avoid matching against every possibility, to keep search time down. There are a number of clues from the environment of the Connected Gym that make this easier.

Match Set Reduction

Since we are trying to figure out which workout move a user is doing from the watch on their wrist, it helps to see what we are up against. I have put together what is almost certainly an abbreviated list of workouts, indoor and outdoor, that one can do in a gym. There are others, but this illustrates the challenge.

List of Workout Moves (not complete)

This list alone contains 172 movements. The easiest way to reduce the matches is by looking at where the user is in the gym. When the Connected Gym system is set up, part of the process with a Tier 3 implementation is mapping the facility and cataloging which equipment is in each area. With each type of equipment comes a preset list of workouts that can be done there.

Weights will have a (rather large) list, but it can be broken into barbells, dumbbells, and selector machines. Cardio will have a fairly small set of matches, and so forth.
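
To make the idea concrete, here is a minimal sketch of an area-to-moves mapping. The area names and move lists are illustrative stand-ins, not the actual catalog a real installation would carry.

```python
# Hypothetical mapping of gym areas to candidate moves (illustrative only).
MOVES_BY_AREA = {
    "barbell":  {"Bicep Curl", "Bench Press", "Deadlift", "Back Squat", "Overhead Press"},
    "dumbbell": {"Bicep Curl", "Hammer Curl", "Lateral Raise", "Dumbbell Row"},
    "selector": {"Bicep Curl", "Leg Extension", "Lat Pulldown", "Chest Press"},
    "cardio":   {"Treadmill Run", "Elliptical", "Stationary Bike", "Rowing Machine"},
}

def candidate_moves(current_area):
    """Return the reduced match set for the area the beacon map places the user in.
    Fall back to the full catalog if the area is unknown."""
    return MOVES_BY_AREA.get(current_area, set().union(*MOVES_BY_AREA.values()))

# While the user is in the dumbbell area, the matcher only considers a handful of moves:
print(sorted(candidate_moves("dumbbell")))
```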

To illustrate how the list above is segmented by area, I've created this animated graphic, based on our Beacon Coverage map in Connected Gym: Tier 3. This is a small gym floor plan, so it lacks a pool, gymnasium, and so on, but you should get the idea.

Match Set Reduction by Area

For the Bicep Curl we mentioned above, you can only do that in the Weight Room of our hypothetical gym. During the motion, the app will see you are in the Barbell, Dumbbell, or Selector area (since you can do a Bicep Curl in any of those areas) and limit the number of moves it is looking for. While this is still a fairly large list of possible moves, I have seen it demonstrated quite accurately by Focus Motion.
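
This is not how Focus Motion's matching works internally (that is their own technology); purely as an illustration, a minimal nearest-template matcher over the area-reduced candidate set might look like the sketch below, assuming the fixed-length representation from the earlier snippet and a hypothetical library of reference templates.

```python
import numpy as np

def classify_move(observed, templates, candidates):
    """Pick the best-matching move from the area-reduced candidate set.

    observed   -- fixed-length representation of the captured motion (e.g. 32 x 3 array)
    templates  -- dict mapping move name -> reference representation of the same shape
    candidates -- the reduced match set for the user's current area
    """
    best_move, best_dist = None, float("inf")
    for move in candidates:
        if move not in templates:
            continue
        dist = float(np.linalg.norm(observed - templates[move]))
        if dist < best_dist:
            best_move, best_dist = move, dist
    return best_move, best_dist
```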

As long as you are doing repetitions, the app can count the reps and then group them by the breaks (resting periods) between them. In that way, you get a listing like 3 sets of 15 reps. I don't particularly advocate publicly displaying the rep count, though. If the user has poor form, the app can miss reps, and miscounts, even small ones, can aggravate the user. I would rather keep the rep count private, treated as an approximation that can be used to gauge the overall effort.
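
A minimal sketch of the grouping step, assuming rep detection has already produced a list of timestamps. The 20-second rest threshold and the helper name are my own illustrative choices, not values from any real product.

```python
def group_into_sets(rep_timestamps, rest_gap_s=20.0):
    """Group individual rep timestamps (in seconds) into sets, splitting
    wherever the pause between reps exceeds rest_gap_s."""
    if not rep_timestamps:
        return []
    sets, current = [], [rep_timestamps[0]]
    for t in rep_timestamps[1:]:
        if t - current[-1] > rest_gap_s:
            sets.append(current)
            current = []
        current.append(t)
    sets.append(current)
    return [len(s) for s in sets]   # e.g. [15, 15, 15] -> "3 sets of 15 reps"

# Example: three blocks of reps about 2 s apart, with ~60 s rests between blocks.
reps = [i * 2.0 for i in range(15)]
reps += [90 + i * 2.0 for i in range(15)]
reps += [180 + i * 2.0 for i in range(15)]
print(group_into_sets(reps))   # [15, 15, 15]
```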

Form data is also available: you can tell whether the user's form is off by its deviation from the "standard" Bicep Curl movement stored in the library. That data is also useful for informing the user that they are not getting an optimal workout. Those users are prime candidates for Personal Training.
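
As a sketch of the idea, assuming the same fixed-length representations as above, form could be scored as a normalized deviation from the library's reference template. The threshold below is a placeholder a real system would have to calibrate.

```python
import numpy as np

def form_score(observed, reference, poor_form_threshold=0.35):
    """Score form as the normalized deviation between the captured motion and
    the library's reference template (both fixed-length representations).
    Returns (deviation, poor_form_flag)."""
    observed = np.asarray(observed, dtype=float)
    reference = np.asarray(reference, dtype=float)
    deviation = np.linalg.norm(observed - reference) / np.linalg.norm(reference)
    return deviation, deviation > poor_form_threshold
```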

Because different exercises are expressed quite differently across the 6 axes of motion (X, Y, Z, α, β, and γ), you can also reduce the Match Set to those moves with large changes in X, for example, or large changes in one of the other axes. A Deadlift, for example, will have a large vertical component while the other five axes show very little change.
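
One way to sketch that prefilter, assuming a window of six-axis samples: rank the axes by how much signal they carry and keep only moves whose library profile emphasizes the same axes. Using per-axis variance as a stand-in for "energy" and the 60% cutoff are assumptions for illustration.

```python
import numpy as np

def dominant_axes(window, energy_fraction=0.6):
    """Identify which of the six motion axes (x, y, z, alpha, beta, gamma)
    carry most of the signal in a window of samples (N x 6 array).
    Moves whose profile doesn't emphasize these axes can be dropped from
    the match set before any detailed comparison."""
    window = np.asarray(window, dtype=float)
    energy = window.var(axis=0)              # per-axis variance as a crude energy measure
    order = np.argsort(energy)[::-1]         # axes from most to least active
    total, kept, running = energy.sum(), [], 0.0
    for axis in order:
        kept.append(int(axis))
        running += energy[axis]
        if total > 0 and running / total >= energy_fraction:
            break
    return kept   # a Deadlift window should return mostly the vertical axis
```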

Universal Matching

Different areas of the club will produce different results. While the user may do many weight-lifting moves and toss in some cardio, some workouts are entirely homogeneous. Swimming, for example, would be a separate module activated when the user enters the pool area and starts swimming.

While the workout is tagged Swimming, it can be broken into laps, groups of laps, and stroke types, and display a number of parameters related to swimming performance. For such a capability, I would recommend reaching out to Swim.com and integrating their app into the club application.

Group Fitness is also unique. It doesn't really require recognizing the motion so much as looking at what room you are in and what time it is. From the time slot, you can query the club's website, find out which class it is, and tag it LesMills Bodypump, for example. While you could break it down into individual elements, I don't think that is necessary with branded group fitness content.
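
A minimal sketch of that lookup, with a hypothetical hard-coded schedule standing in for the club's actual scheduling system or website API.

```python
from datetime import datetime

# Hypothetical schedule keyed by (room, weekday, hour); in practice this would
# come from the club's scheduling system or website.
CLASS_SCHEDULE = {
    ("Studio A", 0, 18): "LesMills Bodypump",   # Mondays 18:00
    ("Studio A", 2, 19): "Yoga Flow",           # Wednesdays 19:00
    ("Cycle Studio", 5, 9): "Spin 45",          # Saturdays 09:00
}

def tag_group_fitness(room, when=None):
    """Tag a workout by looking up the class running in this room at this time,
    rather than trying to recognize individual movements."""
    when = when or datetime.now()
    return CLASS_SCHEDULE.get((room, when.weekday(), when.hour),
                              "Group Fitness (unscheduled)")
```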

Outside the Club

If the user is outside the club, then context depends on where they are. If they are outdoors running, the device will see a fairly distinctive signature, and the GPS data will show a rate of change within human parameters for running. These constraints prevent tagging the activity as cycling, which has a different motion signature, and prevent the app from tracking the user while they drive their car.
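
A rough sketch of that gating on GPS-derived speed. The speed bands below are ballpark human-scale assumptions, not validated thresholds, and a real system would combine them with the motion signature.

```python
def classify_outdoor_activity(avg_speed_mps):
    """Coarse gate on GPS-derived average speed (metres per second).
    Walking and running top out around 7 m/s, cycling sits well above that,
    and sustained speeds beyond ~20 m/s are almost certainly a vehicle."""
    if avg_speed_mps < 0.5:
        return "stationary"
    if avg_speed_mps < 3.0:
        return "walking"
    if avg_speed_mps < 7.0:
        return "running"
    if avg_speed_mps < 20.0:
        return "cycling"
    return "driving (do not track as a workout)"
```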

At home, the user may have some equipment, which is recognizable, as well as bodyweight exercises. While the initial list of matches may be bigger, there are still limits on what you do at home compared to a gym. The system can also learn what the user typically does, making detection easier over time.

Conclusion

As the user moves through the club, the Match Subset changes to match the area they are in. Mixed workouts (like the ones I do) can be segmented and tracked. If the user works out outdoors, at home, or in a hotel, the app can likewise make reasonable guesses as to what they are doing.

By capturing this information, the user gets valuable feedback to help them improve future workouts. Missed muscle groups can be highlighted and poor form pointed out. Complementary workouts, say, to round out their running, can be created and suggested for them.

And finally, the club gets a far more holistic view of its members. This allows it to offer customized services to them, as well as optimize the club and its offerings for everyone.

IoT

A final note: this is a very narrow use case focused on Fitness. I've been writing a lot of articles specifically on Fitness for the past few months because it is a passion of mine. However, this applies much more broadly than this scenario. And though we focused on the user wearing a Smart Watch, the same approach can also employ cameras, or both.

Matching a movement to a specific workout is a single application. Matching also applies to a user interacting with the space around them. That space could be their employer's building, a hospital, a museum, a smart city, and so on.

It could also be an autonomous system with arrays of sensors, switches, and actuators. This kind of matching underpins a wide set of applications in responsive systems, and it underpins a great User Experience.