A Comparison of Location Search UI Patterns on Mobile Devices

Author
Sebastian Meier, Frank Heidmann, Andreas Thom
Year
2014
Publication
Poster published at the Mobile HCI conference 2014
Download
PDF

Abstract

This poster looks at how users utilize mobile applications that offer an interface for finding locations, and how their interaction changes depending on their intent. Through an analysis of existing interfaces we identified five location search patterns. In a further evaluation of these patterns we tried to identify which patterns serve which user demands for information. In a goal-directed pilot study we gained a first insight into the correlations between specific user requirements and location search patterns.

Author Keywords
Interface; Human Factors; Mobile; Design; Patterns

ACM Classification Keywords
H.1.2 Human Factors; H.5.2 User Interfaces

General Terms
Human Factors; Design

Introduction

Since the introduction of Apple’s App Store, soon accompanied by Google’s Play Store and other mobile app stores, navigational and location search applications (apps) have been an essential part of the apps on offer. These apps take the user’s current position into account and use this data, for example, to help users find locations. They can be categorized as Location Based Applications or, more generally, reactive Location Based Services (Küpper, 2005). When we use the term location in this poster we mean a physical location such as a shop or restaurant. When we talk about a geo-referenced position, such as an address, we call it geo-location, place or position.

Related Work

Even though this type of interaction, finding locations via mobile devices, plays an important role in user requirements today, as a current study underlines (Gutierrez, 2013), we do not know much about how people use the variety of interfaces available within those apps. Most of the existing research in this field focuses on the technological part of the process, for example on the optimization of location-sensitive queries (Bouidghaghen, Tamine, & Boughanem, 2011) or the prediction of clicks, or rather choices (Lymberopoulos, Zhao, König, Berberich, & Liu, 2011). Many of these projects overlap with research in the field of context-aware applications. The research in this area more strongly related to HCI mostly looks at efficiency/clicks (Liu, Rau, & Gao, 2010) or at specific interface elements such as maps, lists and categories (Iwata et al., 2010), but there is no research looking at the differences between the interfaces used in location search apps. To find out more about how users interact with these kinds of app interfaces we conducted a series of experiments.

Phase I – Extraction of Existing Patterns

In order to compare the existing interface elements we used the pattern methodology that is widely used in the HCI community (Borchers, 2000; Dearden & Finlay, 2006; Folmer, Welie, & Bosch, 2006; Granlund, Lafrenière, & Carr, 2001) to document and categorize these elements. For this purpose we looked at a large set of mobile apps from Apple’s and Google’s app stores. Even though the selected apps serve various purposes, e.g. social networking, navigation and location searching, to name a few, we were able to identify similar interface patterns throughout the whole set of apps. Since the number of patterns did not increase as we increased the number of examined apps, we will only discuss a subset of 8 apps (table 1).
We were able to extract 5 patterns from the analyzed apps. The five patterns, search slot, categories, result-list, map and filters, are not only used as standalone interface elements but also in combination; Google, for example, uses a combination of the search slot and filters.

Fig. 1–6: The five UI patterns

For the apps overview (table 1) we further categorized the used elements into first level and second level elements. First level interface elements are elements that are available for interaction when opening the app or when switching to the search screen. All elements that appear later or require further interaction to appear are second level elements. Some of the apps offer direct feedback via the result-list while the search parameters are being changed, others require a submit action to update the results. As our focus lay on the search patterns, we only clustered the result-views into list- and map-patterns. The reason we saw a need to look into the result-views in addition to the search patterns were the map-views: as we state later, the map view is a mixture of a result-view and a search element at the same time.

App          | Purpose                           | iOS / Android | First Level UI    | Second Level UI | Result UI
Foursquare   | Social Network, Location Finding  | yes / yes     | Map, Search       | Category        | Map, List
Yelp/Qype    | Location Finding                  | yes / yes     | Category, Search  | List            | Map, List
Around Me    | Location Finding                  | yes / yes     | Category, Search  | List            | Map, List
CityInfo     | Location Finding, City Tour       | yes / no      | List, Search      | Filter          | Map, List
Urbany       | Location Finding                  | yes / no      | Category, Search  | List            | Map, List
Google Maps  | Navigation, Location Finding      | yes / yes     | Map, Search       | Category        | Map, List
Google Local | Location Finding                  | yes / yes     | Category, Search  | List            | Map, List
Facebook     | Social Network, Location Finding  | yes / yes     | Search            | List            | Map, List
Table 1. Selected Applications and their main purpose as well as the first and second level interface patterns

Search Slot

The most common search interface, the search slot, prominently used by Google, can be found in almost every location search app. The search slot is used to enter a search term (e.g. fast food, breakfast) in order to look up a specific location or a place. Some apps, like the Google app, use one search slot for both location and place terms; other apps differentiate between a slot for location search terms and one for place terms. While some apps just provide enter- and submit-functionality, other apps implement auto-complete features or direct feedback. Direct feedback means that while the user is typing, results for the incomplete search term start appearing, for example as a list.
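
To illustrate the direct-feedback behaviour described above, the following is a minimal sketch in TypeScript: the (incomplete) term is sent to a search backend after a short debounce and the result list is re-rendered on every response. The `/search` endpoint, the `Result` shape and the 250 ms delay are illustrative assumptions, not details taken from any of the examined apps.

```typescript
interface Result {
  id: string;
  name: string;
}

// Delay a function until the user has stopped typing for `waitMs` milliseconds.
function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

async function queryLocations(term: string): Promise<Result[]> {
  // Hypothetical backend call; a real app would hit its own search API here.
  const response = await fetch(`/search?q=${encodeURIComponent(term)}`);
  return (await response.json()) as Result[];
}

function renderResultList(results: Result[]): void {
  // Placeholder rendering; a real app would update its list view instead.
  console.log(results.map(r => r.name).join("\n"));
}

// Fire a query on every keystroke, but only after 250 ms of inactivity.
const onSearchSlotInput = debounce(async (term: string) => {
  if (term.trim().length === 0) return;
  renderResultList(await queryLocations(term));
}, 250);
```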

Categories

Categories are meta-data terms under which locations are grouped (e.g. restaurants, parks, etc.). Categories can be used in an exclusive or inclusive way. The interface elements for categories are usually used in two ways: on the one hand there are apps that use categories as a first level search interface to reveal locations from a specific category (e.g. AroundMe), on the other hand there are apps that use categories as filters to narrow down a search query from a search slot. Categories were found as vertical or horizontal lists (Fig. 2 & 3). They were visualized as icons, text, or a combination of both. Besides the lists, dropdowns containing the categories were also found. Categories could be subsumed under filters, as they present a predefined list of meta-data values. We have created a distinct pattern for categories, since, in contrast to the other filters, categories appear as a first level UI element in many apps.
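
The difference between inclusive and exclusive category use can be made concrete with a small sketch; the `Location` shape and the example category names are assumptions for illustration only.

```typescript
interface Location {
  name: string;
  categories: string[];
}

// Inclusive: a location matches if it carries at least one selected category.
function filterInclusive(locations: Location[], selected: string[]): Location[] {
  return locations.filter(l => selected.some(c => l.categories.includes(c)));
}

// Exclusive: a location matches only if it carries every selected category.
function filterExclusive(locations: Location[], selected: string[]): Location[] {
  return locations.filter(l => selected.every(c => l.categories.includes(c)));
}

const places: Location[] = [
  { name: "Café Mitte", categories: ["café", "breakfast"] },
  { name: "Burger Corner", categories: ["fast food"] },
];

console.log(filterInclusive(places, ["café", "fast food"]).length); // 2
console.log(filterExclusive(places, ["café", "breakfast"]).length); // 1
```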

Map

While some apps offer maps only as a result-view, other apps also offer maps as a search interface (Fig. 4). Foursquare, one of the most popular of these apps, uses the map as a first level UI element, presenting a visualization of the locations available nearby. The map thereby serves as an interface element that can change the area on which the search query is executed: by panning the map, and thus changing the bounding box of the area in focus, the search query is modified. Besides manipulating the search query, the map can be used to select results.
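
A minimal sketch of the map-as-search-interface mechanism, assuming a map widget that reports the visible bounding box after a pan gesture; the `BoundingBox` shape and the `/search` endpoint are illustrative, not an actual map SDK or app API.

```typescript
interface BoundingBox {
  south: number;
  west: number;
  north: number;
  east: number;
}

// Re-run the current query, restricted to the visible map area.
async function searchInArea(term: string, box: BoundingBox): Promise<unknown[]> {
  const params = new URLSearchParams({
    q: term,
    bbox: [box.west, box.south, box.east, box.north].join(","),
  });
  const response = await fetch(`/search?${params.toString()}`);
  return (await response.json()) as unknown[];
}

// Called by the (hypothetical) map widget whenever the user stops panning.
async function onMapMoveEnd(currentTerm: string, visibleArea: BoundingBox) {
  const results = await searchInArea(currentTerm, visibleArea);
  console.log(`Found ${results.length} locations in the visible area`);
}
```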

Filters

Filters are second-level search interface elements. We use this term to summarize all filter interfaces apart from categories, which are also used as first level UI elements. Filters are meta-data elements with the purpose of refining a search query. Depending on the app, these filters can be ratings, opening hours, sub-categories or price, to name a few. The modification of the search query by a filter is, depending on the app, exclusive or inclusive. Even though we will not discuss every possible meta-data value that can be presented as a filter, we want to point out that we identified two classes of filters. On the one hand there are parameter filters, which are independent from each other; on the other hand there are sub-filters that belong to a parent filter, the simplest example being sub-categories. The actual interface elements representing the filters vary throughout the apps, from sliders (Fig. 5) to dropdowns or checkbox-like behavior. The full range of possibilities will not be discussed in this poster.
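
The two filter classes can be sketched as a small data model; the concrete filter names and value types below are illustrative assumptions rather than a description of any of the examined apps.

```typescript
// Independent parameter filters: each one narrows the query on its own.
type ParameterFilter =
  | { kind: "rating"; min: number }          // e.g. at least 3 stars
  | { kind: "openNow"; value: boolean }
  | { kind: "price"; max: 1 | 2 | 3 | 4 };   // price range, e.g. set via a slider

// Sub-filters only make sense relative to a parent filter, e.g. sub-categories.
interface SubFilter {
  kind: "subCategory";
  parent: string;   // e.g. "restaurant"
  value: string;    // e.g. "italian"
}

type Filter = ParameterFilter | SubFilter;

// A sub-filter is only applicable while its parent category is part of the query.
function isApplicable(filter: Filter, activeCategories: string[]): boolean {
  return filter.kind !== "subCategory" || activeCategories.includes(filter.parent);
}

const active: Filter[] = [
  { kind: "openNow", value: true },
  { kind: "subCategory", parent: "restaurant", value: "italian" },
];

console.log(active.filter(f => isApplicable(f, ["restaurant"])).length); // 2
```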

Phase II – Card Sorting

We started our experimental phase with a series of initial questionnaires with 14 tourists and locals, which gave us brief insights into what could be important in the process of finding locations. In order to construct a more solid hypothesis before planning our study, we conducted a card sorting experiment with 10 participants. To this end we analyzed the patterns and translated them into search parameters, which could then be ranked according to their importance for finding a location. The parameters were distance, rating, opening times, price-range, accessibility and weather. Additionally, participants were asked to add parameters; the added parameters were distance to the next public transport and subcategories. Distance was ranked most important (table 2), followed by the information whether the location is currently open and the rating.
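
As a brief sketch of how the aggregated values in table 2 can be derived from the individual rankings, the snippet below computes mean and median ranks per parameter; the example rankings are made up for illustration and are not the study's raw data.

```typescript
function mean(values: number[]): number {
  return values.reduce((a, b) => a + b, 0) / values.length;
}

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid];
}

// ranks[parameter] = rank given by each participant (1 = most important).
// Illustrative values only.
const ranks: Record<string, number[]> = {
  distance: [1, 2, 1, 3, 1, 2, 2, 2],
  rating:   [3, 4, 3, 2, 4, 3, 3, 5],
};

for (const [parameter, r] of Object.entries(ranks)) {
  console.log(parameter, mean(r).toFixed(3), median(r));
}
```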

Hypothesis

Combining the patterns we identified in phase I, we formulate the hypothesis that for location search tasks conducted in a nearby area, users will more often use the map as a filter and search interface, while in cases where the location is further away, users will more likely use the search slot.

Phase III – Experiment

For the experiment we developed a web-app that was connected to the Yelp/Qype API for data access. The subjects were given two tasks. The first task was to find a fast food restaurant near their current position. The second task was to find a café for breakfast in a specific area further away. The app indicated the completion of a task as soon as the participant reached a location detail page holding the data of a location that fit the task criteria. Every task started on a page consisting of a search slot, a map and a list of categories. From this starting point the participants were simply given the task, with no further introduction.
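
As a rough sketch of how such task completion could be detected in the experiment web-app, the snippet below marks a task as complete when the opened detail page matches the task criteria. The `LocationDetail` and `Task` shapes and the matching rules are assumptions for illustration; they are neither the actual implementation nor the Yelp/Qype API schema.

```typescript
interface LocationDetail {
  id: string;
  categories: string[];
  distanceMeters: number;
}

interface Task {
  id: number;
  requiredCategory: string;     // e.g. "fast food" or "café"
  maxDistanceMeters?: number;   // only set for the "nearby" task
}

function fulfillsTask(detail: LocationDetail, task: Task): boolean {
  const categoryOk = detail.categories.includes(task.requiredCategory);
  const distanceOk =
    task.maxDistanceMeters === undefined ||
    detail.distanceMeters <= task.maxDistanceMeters;
  return categoryOk && distanceOk;
}

// Called whenever a participant opens a location detail page.
function onDetailPageOpened(detail: LocationDetail, task: Task): void {
  if (fulfillsTask(detail, task)) {
    console.log(`Task ${task.id} completed with location ${detail.id}`);
  }
}
```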

Fig. 7: Location search app
Parameter     | Mean  | Median
Distance      | 1.75  | 1
Now Open      | 2.25  | 2
Rating        | 3.375 | 3
Price-range   | 4.125 | 4
Opening-Hours | 4.125 | 4
Weather       | 6     | 6
Accessible    | 6.375 | 6.5
Table 2. Card sorting results (Users ranked importance from 1 = important to 7 = unimportant).

 

                                | Overall | Setup 1 | Setup 2 | Setup 3 | Setup 4 | Setup 5 | Setup 6
Task 1 – Entry point            |         |         |         |         |         |         |
  Search slot                   |    2    |    1    |    0    |    1    |    0    |    0    |    0
  Map                           |    5    |    1    |    2    |    0    |    0    |    1    |    1
  Category                      |   13    |    3    |    1    |    1    |    3    |    2    |    3
Task 1 – Problem solving entry  |         |         |         |         |         |         |
  Search slot                   |    5    |    1    |    0    |    1    |    0    |    1    |    2
  Map                           |    4    |    1    |    2    |    0    |    0    |    1    |    0
  Category                      |   11    |    3    |    1    |    1    |    3    |    1    |    2
Task 2 – Entry point            |         |         |         |         |         |         |
  Search slot                   |   14    |    5    |    3    |    1    |    2    |    1    |    2
  Map                           |    3    |    0    |    0    |    1    |    1    |    1    |    0
  Category                      |    3    |    0    |    0    |    0    |    0    |    1    |    2
Task 2 – Problem solving entry  |         |         |         |         |         |         |
  Search slot                   |   17    |    5    |    3    |    1    |    2    |    3    |    3
  Map                           |    0    |    0    |    0    |    0    |    0    |    0    |    0
  Category                      |    3    |    0    |    0    |    1    |    1    |    0    |    1
Table 3. Entry point and problem-solving interface chosen by the participants for both tasks, overall and per setup (number of participants)

To counter the possibility that users might just click on the UI element that comes first, we made sure that throughout the experiment the elements were equally distributed vertically. This means that all possible orderings of the elements (see Fig. 7) were shown to the users. Our main focus was on the entry point for each task as well as on how the user would complete the task. The interaction happening in between was logged and analyzed but will not be discussed in this poster. For logging we observed our participants and additionally used logging technology on the phone.
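
A minimal sketch of such counterbalancing: the six permutations of the three first-level elements yield one vertical layout per setup (cf. Setup 1–6 in table 3). The assignment of permutations to setup numbers and to participants shown here is an assumption for illustration, not the study's actual assignment.

```typescript
type UiElement = "searchSlot" | "map" | "categories";

// All orderings of a list of items (3 elements -> 6 permutations).
function permutations<T>(items: T[]): T[][] {
  if (items.length <= 1) return [items];
  return items.flatMap((item, i) =>
    permutations([...items.slice(0, i), ...items.slice(i + 1)]).map(rest => [item, ...rest])
  );
}

const setups: UiElement[][] = permutations<UiElement>(["searchSlot", "map", "categories"]);
console.log(setups.length); // 6

// Rotate participants through the six layouts so each ordering is used equally often.
function layoutForParticipant(participantIndex: number): UiElement[] {
  return setups[participantIndex % setups.length];
}
```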

Discussion

We tested more than 25 participants. After examining the data we had to dismiss 5 sets, leaving 20 valid datasets. As the results show (table 3), the entry point for the first task was the category interface in 65% of the cases. In regards to the map, our hypothesis has thus proven wrong, and we have to revise it: for searches for locations nearby, participants use the category interface. For the second task, the entry point is the search slot; in regards to solving the second task, the second part of our hypothesis is correct. An interesting effect that also needs more detailed observation is that those participants who chose the map as an entry point for the second task still solved the task through the search slot in the end.

Conclusion

The study discussed above was conducted with only a small sample group; for further research it needs to be applied to a bigger sample. We only looked at two types of tasks, finding locations nearby and locations further away; in the future we would like to expand the testing to more diverse tasks. To further target the contextual and mobility factors of mobile devices, the experiment design should also be extended to include in-situ settings. The search for locations through mobile devices is a thriving field in the area of mobile applications. We were able to identify 5 UI patterns used in apps serving this purpose. Furthermore, we conducted a pilot study revealing that categories and the search slot are important first level UI elements when conducting a search, and that the map as a first level UI element, in our case, does not lead to a successful conversion. The results from our experiments raise the question why maps perform worse than the other two UI elements, or rather: what are the perceptual and cognitive differences between the map and these elements with regard to location search? The research conducted in this paper should provide other researchers with a starting point for looking into the behavior of users utilizing mobile apps for location search.

References

[1]
Borchers, J. O. (2000). A pattern approach to interaction design. In Proceedings of the 3rd Conference on Designing Interactive Systems (DIS '00) (pp. 369–378). New York, NY, USA: ACM Press.
[2]
Bouidghaghen, O., Tamine, L., & Boughanem, M. (2011). Personalizing mobile web search for location sensitive queries. In Proceedings of the 12th IEEE International Conference on Mobile Data Management (Vol. 1, pp. 110–118).
[3]
Dearden, A., & Finlay, J. (2006). Pattern languages in HCI: A critical review. Human-Computer Interaction, 21(1), 49–102.
[4]
Folmer, E., Welie, M. V., & Bosch, J. (2006). Bridging patterns: An approach to bridge gaps between SE and HCI. Information and Software Technology, 48(2), 69–89.
[5]
Granlund, A., Lafrenière, D., & Carr, D. A. (2001). A pattern-supported approach to the user interface design process. In Proceedings of HCI International 2001.
[6]
Gutierrez, C. (2013). 6th Annual 15miles/Neustar Localeze Local Search Usage Study Conducted by comScore, 1–18.
[7]
Iwata, M., Hara, T., Shimatani, K., Mashita, T., Kiyokawa, K., Nishio, S., & Takemura, H. (2010). A Location-based Content Search System Considering Situations of Mobile Users. Procedia Computer Science, 5, 426–433.
[8]
Küpper, A. (2005). Location-Based Services. John Wiley & Sons.
[9]
Liu, Rau, & Gao. (2010). Mobile information search for location-based information. Computers in Industry, 61(4).
[10]
Lymberopoulos, D., Zhao, P., König, C., Berberich, K., & Liu, J. (2011). Location-aware click prediction in mobile local search. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management (CIKM '11) (pp. 413–422). New York, NY, USA: ACM.