How to improve the search for a vehicle in a car rental company?

Project Overview
Europcar offers rental vehicles to its B2B and B2C clients. The company is currently rebuilding its website to deliver a fast, seamless & trusted experience.
My role and teammates
Integrated into the Product Line since sprint 0, I started working on this case as a Middle Product Designer in January 2020. The team is composed of 1 Product Manager, 1 Product Owner, 1 Product Designer (me), 1 Senior Product Designer (20% on the project), 3 front-end developers, 3 back-end developers, and 1 Quality Assurance Tester.

Empathize

Understanding what a search engine is

If we look up the definition of a search engine online, here's what we find:

A search engine is an online tool that searches for results in its database based on the search query (keyword) submitted by the internet user.
Source: https://mangools.com/blog/seopedia/search-engines/

In Europcar's case, the search engine helps the company better serve user needs by presenting a list of available vehicles based on the information users have entered. Below, you can see the legacy search engine on Europcar's website.

Exploring the human context

Before this project started, Hamza (one of the designers on the team) had worked on creating personas. He looked at our Google Analytics data and conducted user interviews. Once the personas were finalized, Hamza presented them to us and shared all the documentation. We used this resource to better understand how our users were interacting with Europcar's service.

A focus on the experience map

Because we were starting the work on the new website with the search engine, I selected this particular segment of the experience map to better understand how users were interacting with it.

Define
the problem

Understanding the problem

When looking at the experience map, we saw that the first step for a user who wants to book a vehicle is the search engine. For users, it's a very exciting part of the experience: they're looking to book a vehicle and they have high expectations.

But when looking at the data we have in Contentsquare, we can observe that 18.8% of our users (across all visitors over a 3-month period) leave the website from the homepage, and a study of 50 session recordings showed that users were mostly leaving after using the search engine.

Defining our challenge

Our goal is to offer a mobile-first product with a fast, seamless & trusted experience to our prospects, customers & internal users who encounter difficulties in accessing and interacting with our website. 

Understanding our users' behaviors

A focus on the use of the search engine on our different products

On the iOS application

Samuel, the Senior Product Designer who worked with me on this project, had already conducted a remote user test of the iOS application in October 2019.

From a previous site redesign test made by an agency

At the very beginning of this project, Miguel (the Head of Product Design at Europcar) showed me a wall with printed mock-ups and flows from a previous site redesign made a year earlier by an agency. When I asked whether this work had been tested with users, the answer was: "Yes, they did a guerrilla test, but we don't work with them anymore, so I don't know if I can get you the results of the test." More than just the results, I was wondering how the guerrilla test had been run, with which type of users, with what kind of protocol, etc.

Because I had no insight into what users thought of this proposal, I launched a remote user test with 50 users from English-speaking countries. I worked on creating the prototype and, because it was my first time running a remote user test, Samuel gave me some advice: what types of questions to ask users, how to use the Testapic tool, what the process looks like, and how long I would have to wait for the results from all 50 users. After learning all of this, I was autonomous enough to launch a new test whenever I felt the need.

Here's what we've learned from this test:

A summary of what we've learned

Develop
the solution

A brief focus on methodology

Because I was working with a front-end developer based in Portugal and in a remote-friendly team, I wanted to make sure that every idea I was testing was documented, to better involve the team when I had technical or business questions. With the help of the PO, we created a design kanban to handle all the use cases of the search engine. We defined all the steps and rules for a card (design task), and I created a documentation template to use every time the PO or a designer creates a new use case.

How about the flow?

With the POs and developers, we organized a workshop using the framework "As a user, I can..." to define the different actions a user would take while searching for a vehicle.

Then we used Miro to go deeper into each step and define the business rules with the POs. For example, a user can select a pick-up location, and the search API detects whether the user is typing an airport, a train station or a station in a city. Rules like these helped us design better placeholder labels, for example, and keep development constraints in mind.
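
To make this concrete, here is a minimal sketch of how such a rule could surface in the front end, assuming a TypeScript codebase; the response shape and names (LocationSuggestion, groupSuggestions) are illustrative assumptions, not Europcar's actual API:

```typescript
// Hypothetical types: the real search API response is not documented here.
type LocationType = "airport" | "train_station" | "city_station";

interface LocationSuggestion {
  id: string;
  name: string;        // e.g. "Paris Charles de Gaulle Airport"
  type: LocationType;  // detected by the search API for each suggestion
}

// Group suggestions by type so the autocomplete can show labelled sections
// (Airports / Train stations / Stations in the city).
function groupSuggestions(
  suggestions: LocationSuggestion[],
): Map<LocationType, LocationSuggestion[]> {
  const groups = new Map<LocationType, LocationSuggestion[]>();
  for (const suggestion of suggestions) {
    const bucket = groups.get(suggestion.type) ?? [];
    bucket.push(suggestion);
    groups.set(suggestion.type, bucket);
  }
  return groups;
}

// The same rule informs the copy: one placeholder can cover all three cases.
const pickUpPlaceholder = "Airport, train station or city";
```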

Let's have a look at the standards

We started a specific benchmark on desktop and mobile to see how other booking interfaces handle their search engines. We didn't only look at what direct competitors like Avis or Hertz were doing; we defined a list of 40 booking services that would better represent the standard of booking across different categories:

When starting this benchmark, we had some questions:

  • How are those services dealing with their search engine?
  • Are there any standards?
  • Are they displaying all the fields directly or are they using a search engine divided in different steps?
  • What is the best place in the page for the search engine?

For example, for the third question, we saw that some companies like Enterprise displayed only one form field at the beginning; once the user had filled out this field, the other fields needed to start the search would be displayed. We also saw that companies like Booking displayed all the form fields upfront.

Flow sketching

With the help of our previous observations, we started to look at how the flow could be visualized. I sketched a few wireframes with pen and paper to see how it could work (I was so focused during this phase that I forgot to take a picture of them, my bad).

Visualizing the whole search engine flow

Because I like to see as early as possible how users would interact with the flow, I decided to take those wireframes and visualize them in Whimsical.

If we focus just on the users' first step, here are the two solutions for the search engine that came to mind: the first with just a field to search for a location, and the other with the location, date and time pickers.

Taking each part of the flow into consideration

We've used data to improve some particular points of the search engine.

To give you an example, when looking at the data we have in Contentsquare (you'll see a visualization of it below), we saw that the click rate on the vehicle tab in the homepage search engine is really low: 1.07% across all visitors over a 3-month period. On the other hand, the click rate on the same vehicle tab (with more items) on the Select Page (which you reach after clicking the Search button of the search engine) is much higher: 5.8% across all visitors over the same period.

A few elements can explain this difference in usage:

  • On the homepage, the vehicle tab is displayed with a green background on top of another large green background, so the contrast of this block is lower than for other elements. Users might not see it and are therefore more likely to use the one on the Select Page.
  • On the homepage, the vehicle tab looks detached from the search engine, whereas on the Select Page it looks integrated into the top section of the page. Users might not understand that this element is part of the search engine.

In the next steps, you'll see how we worked on optimizing this part of the flow and creating a better hierarchy between the different components of the search engine.

Here comes Sketch

As an ex-UI Designer (oh yes, I said it), I love defining what value a component has on the page. Is it the most important one? How essential is it to the users? Is it there to be decorative or informative? What would it add to their experience?

That is why I love the cycle of design iterations. You have all the user research resources in mind, the flow, the wireframes; you look at the design system and see what kind of Lego blocks you could play with, and then: it's time! You open Sketch, set up the grid, create an artboard, and the fun begins!

Usually, I don't like having a clean file when doing that step. I think it's important to go with the flow of ideas.

The flow comes to life

Because I always prefer to interact with a dynamic flow and be able to test what I design, I worked on creating prototypes in Marvel. This helped us better understand and define all the cases, and better communicate the product's design vision to front-end developers and POs.

It's time to test!

On the desktop part

We wanted insights from users on what they thought of the two different behaviors (1 field or 3 fields) and of the overall flow of filling in booking information. Therefore, we launched two remote user tests on desktop (via Testapic).

We decided to have two panels of 40 users: Panel A would test the 1 field prototype and then the 3 fields prototype, while Panel B would test the 3 fields prototype first and then the 1 field prototype.

For both panels, we asked Testapic for a mixed-gender panel of users aged 25 to 65 from English-speaking countries.

Our remote user testing methodology
  • Define the hypothesis
  • Create the test protocol
  • Define the panel with Testapic
  • Create the prototype
  • Integrate the test protocol in Testapic and test it
  • Launch the test
  • Be patient and wait with a cup of (green) tea for the 80 users' feedback
  • Start the analysis
  • Share the results with Product Line, Design Team and Europcar Mobility Group

On the mobile part

We decided to do guerrilla testing to get some insights on the prototype. With Samuel, I went to the coffee corner at St Lazare train station in Paris. We interviewed 5 users, aged 22 to 54: 1 woman, 3 men and a couple (a man and a woman).

We gave the users some context: "You're going on vacation and would like to book a car for a road trip in the UK. You've already booked your flight tickets, and now you've searched on Google and clicked on the first result to book a car."

Then we let the users do the test with only the following in hand:

  • A phone with the prototype
  • A fake boarding pass

What did users think of those solutions?

After getting feedback from the 80 remote test users and the 5 guerrilla test users, we started the analysis. Here's some of the data that helped us prioritize what to improve:

On the desktop part

The percentages are based on both panels combined, i.e. 80 users.

✅ 83% of the users like the ease and clarity of the flow

"The search block looks very clear and easy to follow. No clutter on the page that distracts you from what you came there for. A nice clear and professional layout."

✅ 10 users specifically mentioned that they liked how the next step opens automatically

"I liked the way the calendar and time options were automatically displayed as being the next steps without having to select anything."

🆘 22.5% of the users don't understand how to select a different return location

"I wouldn't know how to change my return location because there is no field for that option."

As we discussed earlier in this case study, one of the things we wanted to test was the perception of the 1 field vs 3 fields search engine.

To collect those insights, we first asked Panels A and B to rate the search engine, at first look, on a scale from Easy to Very Difficult.
Then, at the end of the test, we introduced them to the second prototype (the 3 fields for Panel A, the 1 field for Panel B) and asked them the same question.

Below, you'll find the results of the study. At first glance, we can see that both proposals are considered Easy or Very Easy by most users. But looking deeper, we can see that even though Panel A found the 1 field prototype easy at first look, Panel B, on the other hand, felt that information would be missing and rated its usage as moderate (13%), difficult (22%) or very difficult (4%).

For the 3 fields prototype, we can see that both graphs show positive results. Therefore, we decided to go with this solution.

On the mobile part

The percentages are based on our 5-user panel.

We had very good feedback from users during the guerrilla test. I was very happy to see users handle the boarding pass and the prototype. Watching them get frustrated or succeed at tasks was a really joyful moment for Samuel and me.

✅ 100% of the users filled in their booking information in less than 20 seconds

"Oh, it's already done ?"(after playing with the prototype to fill the form)

✅ 100% of the users find the process very easy and intuitive

"Easy to use, it's ergonomic and simple."

⚠️ When asked, 3 users didn't understand what contract ID meant

"It's for protections maybe. Oh, wait, no it might be for people that already have an account."

🆘 70% of the users didn't find a way to return the car to a different location

"I have the impression that I can't return the car to a different location. I feel stuck."

What are the things to improve?

The analysis of both tests helped us prioritize tickets on the design kanban:

  • How to define a different return location?
  • How to improve the understanding of the contract ID?

Refine
the solution

How to define a different return location?

Both desktop and mobile user tests were unanimous: we needed to iterate so that a user could choose a different return location.
To give you a bit more context, this case has been tricky from the beginning because there are two ways of looking at it: from the business side and from the user side. From the business side, we don't want to encourage people to move vehicles from one station to another because it represents a repatriation cost. But from the user side, this need concerns 20% of our customers, so we should offer the feature.

Therefore, Samuel tried different ideas and we gathered to discuss them. We were looking at two different options. After some discussion, we decided to go with the first one, which added more consistency to the experience and felt easier.

You can see the winning version below. If users want to select a different return location, they uncheck the "Same return location" checkbox and type their choice into the "Return location" field. If they change their mind and want the same return location, they can simply click the checkbox again.
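
As a rough illustration of this behavior, here is a minimal sketch assuming a React front end; the component, state and label names are hypothetical and only show the checkbox-driven toggle, not the production code:

```tsx
import { useState } from "react";

function ReturnLocationField() {
  // Checked by default: most users return the car to the pick-up location.
  const [sameReturnLocation, setSameReturnLocation] = useState(true);
  const [returnLocation, setReturnLocation] = useState("");

  return (
    <div>
      <label>
        <input
          type="checkbox"
          checked={sameReturnLocation}
          // Re-checking the box hides the field and falls back to the pick-up location.
          onChange={(event) => setSameReturnLocation(event.target.checked)}
        />
        Same return location
      </label>

      {!sameReturnLocation && (
        <input
          type="text"
          placeholder="Return location"
          value={returnLocation}
          onChange={(event) => setReturnLocation(event.target.value)}
        />
      )}
    </div>
  );
}
```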

How to improve the understanding of the contract ID?

Users have trouble understanding what a contract ID is. And to be totally fair, I get it! That was one of my first questions when I started working on the search engine: what the heck is a contract ID? Here's the answer: a contract ID is a numeric-only discount code, available to companies that have a partnership with Europcar. With this information in hand, we needed to find a solution to improve the understanding of both B2B and B2C users.
To fix this problem, we changed the copy from "contract ID" to "negotiated rate" and added an informative icon button that opens a tooltip on hover to explain what it means.
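
For the curious, the pattern could look like the following minimal sketch, again assuming a React front end; the markup and copy are illustrative, and the aria attributes keep the tooltip usable with a keyboard and screen reader:

```tsx
import { useState } from "react";

function NegotiatedRateInfo() {
  const [open, setOpen] = useState(false);

  return (
    <span>
      Negotiated rate
      <button
        type="button"
        aria-label="What is a negotiated rate?"
        aria-describedby="negotiated-rate-tooltip"
        // Opens on hover for mouse users and on focus for keyboard users.
        onMouseEnter={() => setOpen(true)}
        onMouseLeave={() => setOpen(false)}
        onFocus={() => setOpen(true)}
        onBlur={() => setOpen(false)}
      >
        i
      </button>
      {open && (
        <span role="tooltip" id="negotiated-rate-tooltip">
          A numeric-only discount code for companies with a Europcar partnership.
        </span>
      )}
    </span>
  );
}
```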

Going deeper into the flow

Now that the regular flow has been tested, it's time for us to go deeper into the edge cases of the search engine.

A list of all the cases we had to work on.

Deliver
the solution

Once we had validated the first version of the MVP with user testing, it was time for us to prepare the delivery of the search engine for the front-end developers.

How did we manage the copywriting?

In sprint 1, we had the great opportunity to welcome Belèn to the design team. She's the copywriter for Europcar, Ubeeqo, Interrent and the other companies in the Europcar Mobility Group.
I met with her on a video call (she's based in Barcelona) to present the new website project. We created a Copywriting Review template to make sure we gave her enough context (prototype, screenshots, documentation on Notion) for our copywriting proposals.
After getting her feedback, I updated the documentation on Notion to give the POs and front-end developers all the right information.

Design System

This project was an opportunity for Europcar to implement the YES Design System (Your Europcar System). Ali, the System Designer, had already worked on UI components, and now we had design and code components coming together.

Throughout the work on the search engine, we shared our iterations with the System Designer and gave him access to the files on Sketch Cloud. When all the use cases were done, we decided to use Jira to create tickets for our component needs: updating an existing component or creating a new one. Some components, like the form inputs, needed to evolve: one of my focuses when arriving at Europcar was accessibility. I audited the UI kit to determine what actions were needed to make the components AA compliant. With Ali, we worked on implementing those rules in the design system, starting with the form components.
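
As an example of the kind of rule such an audit checks, here is a minimal sketch of the WCAG 2.1 contrast-ratio calculation in TypeScript; the helper names and color values are mine, not part of the YES Design System:

```typescript
// WCAG 2.1 relative luminance for an sRGB color (0-255 channels).
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const linearize = (channel: number) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio between two colors; AA requires at least 4.5:1 for normal
// text and 3:1 for large text.
function contrastRatio(
  foreground: [number, number, number],
  background: [number, number, number],
): number {
  const [lighter, darker] = [
    relativeLuminance(foreground),
    relativeLuminance(background),
  ].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// Example with illustrative values: white text on a green button background.
const passesAA = contrastRatio([255, 255, 255], [0, 125, 60]) >= 4.5;
```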

To Zeplin and beyond

When all the components were created, updated and tested in Sketch, Ali and I worked on their implementation in Zeplin (the tool we use for the handover to front-end developers). With all the variables, front-end developers had all the information they needed to build a form input with its default, hover, focused, filled and error states.

What I've learned

Reflecting on this project, I had the great opportunity to learn a lot. Here are a few things that I'll remember for the future: