Unearth Mobile App
Overview
Trying to map what you cannot see is challenging, especially when any inaccuracy can lead to major property and personal damage in the form of gas line cross bore explosions. Utilities employ inspectors across the country to map water and sewer lines and gather vital information about the buried infrastructure below our feet. These inspectors are often equipped with a variety of disconnected digital and analog tools, and the inefficient processes they work through create a cumbersome workflow with a lot of room for human error.
Results
A mobile app that allows technicians to capture data accurately and easily in the field, minimizes duplicative work and human error across teams, and is contextual to their workflows.
Role
Discovery and ideation, information architecture and UX exploration, usability testing and iteration, UI refinement and delivery to engineering for implementation.
Utility crew members inspect the physical assets of the utility infrastructure. They capture the data points that describe these assets and communicate their findings back to the utility’s back office. The tools they use to get their jobs done vary widely across companies and teams, but one constant across all of them is the lack of a single field tool that makes it easy to capture geospatial data points as accurately as possible.
After learning more about some of our users’ tools, workflows, and pain points, we focused on three considerations where a mobile app could improve their jobs:
Accuracy is key—Empowering field users to be the source of truth. How might we allow users to capture the information they see with such trusted accuracy that the need for second-guessing and rework is no longer necessary?
Minimize human error—Users have multiple tools they currently use to capture and communicate data throughout their process, creating a scattered error-prone workflow. How might we build a single tool for them to use to lessen their dependencies on the multitude of tools they currently have to carry?
Easy to finish what you start—The flow should be simple and fluid, allowing users to fully complete one step before moving on to the next. How might we create a tool that is contextual to their changing situations, making it more intuitive and natural to enter data as it is captured?
How do we make it easy to draw accurately on a phone?
One of the biggest challenges out of the gate was figuring out how to let users capture the location of an asset in the physical world as accurately as possible. We had to take our users’ working conditions into careful consideration.
Tapping on the screen to place a point would be quick, but not accurate. Snapping a point to existing geometry would be quick and easy, but would cost more to implement—and it wouldn’t work when there was no geometry on the map for users to reference. The idea of using a reticle to zoom in and show the portion of the map under the user’s finger had some momentum behind it, but the standard treatment of offsetting the reticle confused users. Still, there was something to that idea, so I explored other apps that used reticles in different ways:
I then realized: why not use the full viewport of the device as the reticle? Doing this would put users into a “drawing mode” focused on that one task. It would let them navigate the map naturally, just as they would while browsing, and give them more control, which allows for much more accuracy.
I started by sketching out what that flow might look like on a phone and sharing these ideas with the engineering team. I wanted to make sure this path I was going down was going to be feasible for them. Sketching this out in low-fidelity allowed me to get the ideas out quickly and to share them with the team for feedback. Plus, this was an opportunity to collaborate with the engineers and bounce ideas off of each other.
The first flow followed the experience of placing a single point on the map. With the full viewport acting as the reticle, a crosshair at the center of the screen stays stationary in the UI while the map pans below it. The user can pan and zoom the map as they would any other map. When the crosshair is where they want to place the point, they tap the “Add Point” button in the bottom right corner of the screen. They can then tap “Done” to complete the drawing flow and save the data, or tap the point itself to display “Edit” and “Delete” options if they need those functions.
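To make the mechanics of this flow concrete, here is a minimal sketch of the “viewport as reticle” interaction in TypeScript. The names (`DrawingSession`, `panTo`, `addPoint`, `done`) are hypothetical and stand in for whatever the real map SDK provides; the key idea is that the crosshair never moves, so placing a point simply captures the map’s current center coordinate.

```typescript
type LngLat = { lng: number; lat: number };

// Hypothetical model of the drawing mode: the crosshair is fixed at the
// screen centre, so "Add Point" captures whatever coordinate the user
// has panned under it.
class DrawingSession {
  private points: LngLat[] = [];
  private center: LngLat;

  constructor(initialCenter: LngLat) {
    this.center = initialCenter;
  }

  // Panning the map changes the coordinate under the fixed crosshair.
  panTo(center: LngLat): void {
    this.center = center;
  }

  // "Add Point" drops a vertex at the map centre and returns it.
  addPoint(): LngLat {
    const p = { ...this.center };
    this.points.push(p);
    return p;
  }

  // "Done" ends the drawing mode and returns the captured geometry.
  done(): LngLat[] {
    return this.points;
  }
}
```

Because the user positions the map rather than their finger, accuracy comes from the map’s own pan/zoom gestures instead of an imprecise tap.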
After getting feedback and buy-in from the engineering team, I had a good sense that this would not only be feasible for the engineers to build, but would also give our users the accuracy and ease of use they were looking for. But our users need to draw more than just single points—they also capture data drawn as polylines and polygons on their maps. Would this pattern scale to support both of those flows? To find out, I went back to my sketches and started building on the foundation I had already created.
One of the big differences that became apparent when stringing points together is that the shape now has two end points. This may seem like a simple enough concept, but how would we make the experience feel simple for the user? Building on the single-point flow’s allowance for tapping drawn points, we could surface the pertinent actions and functions the user might need. In the case above, when the user taps the opposite end point of the shape they are drawing, they get the options to edit and delete (as before), plus the option to continue drawing from that end point.
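One way to model the two-end-point behavior described above is to keep vertices in draw order and, when the user chooses to continue from the opposite end, flip that order so new vertices always append at the active end. This is a hypothetical sketch, not the actual implementation:

```typescript
type Vertex = { lng: number; lat: number };

// Sketch: a polyline whose vertices are stored in draw order. Tapping
// either end point lets the user resume drawing from that end.
class Polyline {
  vertices: Vertex[] = [];

  add(v: Vertex): void {
    this.vertices.push(v);
  }

  isEndPoint(index: number): boolean {
    return index === 0 || index === this.vertices.length - 1;
  }

  // "Continue drawing" from the tapped end: if the user picked the
  // first vertex, reverse the order so new vertices append there.
  continueFrom(index: number): void {
    if (!this.isEndPoint(index)) throw new Error("not an end point");
    if (index === 0) this.vertices.reverse();
  }
}
```

Storing a single ordered list keeps the rest of the drawing flow identical no matter which end the user extends.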
Along with the ability to select drawn points, it became apparent that the ability to add points along the shape would be helpful and give our users much more flexibility and control.
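Adding a point along an existing segment can be sketched as a midpoint insertion: the new, draggable vertex appears halfway along the tapped segment. The function name and the simple linear midpoint are assumptions for illustration only:

```typescript
type LngLat = { lng: number; lat: number };

// Sketch (hypothetical): insert a new vertex at the midpoint of the
// segment between vertices[segment] and vertices[segment + 1], giving
// the user finer control without redrawing the shape.
function insertMidpoint(vertices: LngLat[], segment: number): LngLat[] {
  const a = vertices[segment];
  const b = vertices[segment + 1];
  const mid: LngLat = { lng: (a.lng + b.lng) / 2, lat: (a.lat + b.lat) / 2 };
  return [
    ...vertices.slice(0, segment + 1),
    mid,
    ...vertices.slice(segment + 1),
  ];
}
```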
As I continued to sketch out the flow of more complex geometries, I could see that more complex functionality would be needed. Above is an example of extending the end-point options from the polyline flow with a “Close Shape” button for the polygon flow. With this basic pattern proving it could scale to the more complex flows of polylines and polygons, it was time to put it to the test.
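The “Close Shape” action can be sketched as appending the first position to the end of the ring, which is how GeoJSON polygon rings represent closure. The function itself is a hypothetical illustration:

```typescript
// [lng, lat] pairs, as in GeoJSON.
type Position = [number, number];

// Sketch: "Close Shape" turns an open polyline into a polygon ring by
// repeating the first position, the convention GeoJSON rings use.
function closeShape(ring: Position[]): Position[] {
  if (ring.length < 3) throw new Error("need at least 3 vertices");
  const [firstLng, firstLat] = ring[0];
  const [lastLng, lastLat] = ring[ring.length - 1];
  const alreadyClosed = firstLng === lastLng && firstLat === lastLat;
  return alreadyClosed ? ring : [...ring, [firstLng, firstLat]];
}
```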
I built a higher-fidelity version of the drawing flow into a tappable prototype (below) and built a usability test to go with it. At the time, I didn’t have direct access to our users, so I did the next best thing: I tested with people I could find. I recruited people from around the office who hadn’t had any insight into what I was designing and had them run through the usability test. The feedback from these tests was very positive, and with only minor tweaks needed to the UI, these explorations and tests became the foundation of our mobile app’s drawing flow. This gave our users an easy and accurate tool to place new shapes on the map in a way they hadn’t been able to before.
Establishing our future vision
As we continued to build on the app, we were stacking features on top of features without taking the opportunity to improve the overall experience. We were starting to run out of space to incorporate access points to these new features. Engineers weren’t utilizing established patterns from previous builds, and product managers didn’t see this as a pressing issue, so it never became the priority it needed to be. How would we scale our app as we continued to grow and add more data and functionality to the experience?
I took on the initiative to create a better architecture and UI system to support this update. I created uniform UI views that could be utilized across all contexts within the app. This allowed content to be king within the user’s context and established UI familiarity across the app. One goal was for the UI to work within the physical context of users—you can control pretty much everything in the app with one hand. This would also simplify engineering and development efforts so the product team could be more nimble in responding to customer needs. Finally, the solution had to work throughout the entire app to be most productive and efficient.
I began to explore what a more unified UI that more efficiently uses patterns could look like across different views of our app. I used key touchpoints across the app to see how the components that contain the content could be unified. I then sketched out a flow pulling all of the major screens together.
Two of the most prominent patterns that emerged from this exercise were the toolkit and the “content container”. The toolkit was originally a standalone feature with its own architecture and pattern. Its function was to allow users to access different asset types that they could then place on the map. It lived at the bottom of the screen and functioned similarly to a text keyboard. It worked well for a while, but it didn’t scale very well when we wanted to add more to the app.
The next iteration simplified the access point to the toolkit into a single button that brought up the full toolkit. This button was kept in an easy-to-access location on the screen since users accessed it often.
This design also started to bring in the new “content container”, which is a high-level container meant both to house the content the user is focused on at any point in their flow and to provide consistent interface and interaction behaviors. This pattern also let users toggle between a list/table view and a map view easily at any point within the app, giving them easy control over how they see their data.
With plans for more features and functionality looming on the roadmap, more explorations were needed to ensure our app could scale intentionally. I built a new navigation bar that could house access points to our existing and upcoming high-level features. I also updated the content container to no longer be two subcomponents (a separated header and its content below). Instead, the content container became one element that stayed consistent for users and overall had fewer moving parts.
I wanted to make sure this content container pattern could scale across the different levels of both map and data views for our users. At the core of their experience, they are interacting with a map and the data associated with that particular map view. The only changes should be in the content itself and in which functions are available at those different touchpoints. As shown below, the content container pattern scaled gracefully across the app and provided users with a consistent UI. This built familiarity with the app so they could spend more time and energy focusing on what’s important: the content.
I then put this all together into a tappable prototype showing how the user would interact with these new patterns across the entirety of the app and its functions. This prototype was used to share with the larger team, users, and stakeholders to communicate where the future of Unearth’s mobile app was heading.