Apple killed the check-in @WWDC. Did anyone notice?

Glenn Gruber | July 8, 2014 | Mobile Strategy

OK, I know you’re saying, “The check-in is already dead! Foursquare killed it long ago” (by which I mean they at least pushed it off into the Swarm app).

But while the check-in for the sake of checking in may no longer be a “thing,” there are any number of reasons why one would still want to identify the places one has been. And with the latest changes to Core Location services, Apple has automated the process of determining when you’ve arrived at a destination.

Introducing Visit Monitoring

The existing Core Location APIs are good for certain things, like tracking a person’s location. However, they often come at a high battery-consumption cost and are better suited to navigation and predetermined landmarks.

Visit Monitoring is a completely different take on location. As Apple put it at WWDC, it is less about getting from Point A to Point B and more about A and B themselves.

Visit Monitoring uses an algorithm to detect when you reach a destination, a “destination” being a place that has been deemed important primarily because you’ve previously spent time there.

Similar to Region Monitoring, Visit Monitoring can launch an app when it detects you’ve reached or left a particular destination. But it has an edge over Region Monitoring (including beacon regions): to use the Region Monitoring API, the app first needs to know which regions you want to monitor, and for some apps that list is either unknowable or overwhelming. On the other hand, if you do know which regions you want to monitor, Region Monitoring is “cheaper” than Visit Monitoring from a power-consumption perspective.
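
To make the difference concrete, here is a minimal Swift sketch (assuming iOS 8+ and “Always” location authorization) of what each approach asks of you. The class name, coordinate, radius, and region identifier are all made-up examples; only the Core Location calls themselves are real.

    import CoreLocation

    class LocationSetup: NSObject, CLLocationManagerDelegate {
        let manager = CLLocationManager()

        func start() {
            manager.delegate = self
            manager.requestAlwaysAuthorization() // both features need "Always" authorization

            // Region Monitoring: the app must already know the place it cares about.
            let office = CLCircularRegion(
                center: CLLocationCoordinate2D(latitude: 37.3318, longitude: -122.0312),
                radius: 100, // meters
                identifier: "office")
            manager.startMonitoring(for: office)

            // Visit Monitoring: no predefined list; the system decides what counts as a visit.
            manager.startMonitoringVisits()
        }
    }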

Visit Monitoring, as stated before, is driven by an algorithm based on how often you visit a particular location, how long you stay, and other factors. Here’s an example of how a visit is tracked by the new location service.
[Figure: anatomy of a visit]
As you can see, Visit Monitoring only logs an actual “arrival” if the user’s location remains unchanged for a certain period of time. This ensures we’re not recording people waiting at a crosswalk for the light to change as a “destination.” Even better, the service looks backward in time at your location to estimate when the actual arrival took place. A similar process happens when determining departure.
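
In code, both the arrival and the departure come through a single delegate callback. Here is a minimal sketch (the class name is just an example): a CLVisit whose departureDate is still “distant future” is an arrival-only event, and arrivalDate is the system’s backward-looking estimate of when you actually got there.

    import CoreLocation

    class VisitReceiver: NSObject, CLLocationManagerDelegate {
        // Called when the system concludes that an arrival or a departure happened.
        func locationManager(_ manager: CLLocationManager, didVisit visit: CLVisit) {
            if visit.departureDate == Date.distantFuture {
                // Arrival-only event: the user is still at the location;
                // arrivalDate is the backward-looking estimate of the arrival time.
                print("Arrived around \(visit.arrivalDate) near \(visit.coordinate)")
            } else {
                // Departure event: both ends of the visit are now estimated.
                print("Visited from \(visit.arrivalDate) to \(visit.departureDate)")
            }
        }
    }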

Once an “arrival” is logged, the system determines the physical location and the closest specific venue (e.g. a restaurant), thus automating the check-in process. Eliminate a step, eliminate the friction. All good. Of course, any number of actions could be triggered once a visit is detected. The acquisition of Spotsetter is making a lot more sense now, eh?
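
As a rough sketch of that automated check-in, you could hand the visit’s coordinate to CLGeocoder and get back the nearest address-level placemark. Matching it to a specific venue like a restaurant would need a places database on top of this (presumably where Spotsetter comes in), so the helper below is illustrative, not Apple’s actual mechanism.

    import CoreLocation

    // Illustrative helper: turn a detected visit into a human-readable place.
    func describeVisit(_ visit: CLVisit) {
        let location = CLLocation(latitude: visit.coordinate.latitude,
                                  longitude: visit.coordinate.longitude)
        CLGeocoder().reverseGeocodeLocation(location) { placemarks, error in
            guard error == nil, let placemark = placemarks?.first else { return }
            // placemark.name is often the nearest point of interest or street address.
            print("Checked in near \(placemark.name ?? "an unknown place")")
        }
    }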

Taken together with iBeacons, Apple has handed developers a robust set of tools to build contextually-driven, location-driven applications.
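
For completeness, the beacon half of that toolkit is just another region type. A minimal sketch, with a placeholder UUID standing in for whatever your own beacons broadcast:

    import CoreLocation

    let manager = CLLocationManager()
    if let uuid = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0") {
        // A CLBeaconRegion is monitored just like a circular region.
        let storeBeacons = CLBeaconRegion(proximityUUID: uuid, identifier: "store-beacons")
        storeBeacons.notifyOnEntry = true
        manager.startMonitoring(for: storeBeacons)
    }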

Nearby Not Nearly So Good

According to AndroidPolice, Google is working on a service that seems to be a mix of iBeacons and Visit Monitoring, called “Nearby.” Born out of the acquisitions of Bump (remember Bump?) and SlickLogin, Nearby will let you “connect, share, and do more with people, places, and things near you.” Sounds like a great extension to Google Now, right?

But here’s the Google rub (from AndroidPolice’s reprint of Nearby’s onboarding screen):

“When Nearby is turned on for your account, Google can periodically turn on the mic, Wi-Fi, Bluetooth, and similar features on all your current and future devices. Google+ and other Google services need this access to help you connect, share, and more.

When you turn on Nearby, you’re also turning on Location History for your account and Location Reporting for this device. Google needs these services to periodically store your location data for use by Nearby, other Google services, and more.”

Gaaaa!!!! Google will turn on the mic, WiFi, Bluetooth, and other sensors “periodically”? Without notifying you when or how often it does so? Why would they? They don’t even have an LED indicator on Google Glass to tell others when you’re filming them. I mean, Holy Cats! The microphone? Why not the camera (front AND rear-facing, of course)?

Now, Google says it will give users the ability to set preferences on this feature, regarding who and what they are made visible to. Further, only the other device will be informed of your proximity to it (or rather, the person holding that device will be). BUT all that information goes through…wait for it…Google’s servers, meaning Google will collect all of it.

And if experience with Google means anything, there won’t be any limitations or preferences—short of disabling the feature—regarding what Google will do with your information, though we can be sure they will do something that maximizes ad revenue.

Glenn Gruber

Glenn Gruber is a Sr. Mobile Strategist at Propelics. He leads enterprise mobile strategy engagements to help companies determine the best way to integrate mobile into their business -- both from a consumer-facing perspective and in terms of empowering employees to be more productive and improve service delivery through the intelligent use of mobile devices and contextual intelligence. Glenn has helped a wide range of enterprises leverage mobile within their business, including Bank of Montreal, Dubai Airports, Carnival Cruise Line, and Merck. He is a leading voice in the travel sector as a contributing Node at Tnooz, where he writes about how mobile and other emerging technologies are impacting travel, and he is a frequent speaker at industry events.
