Expedia

Developer Journey


UX | UX Research | Lead | Product Management

I created designs and led a team of designers for the Unified Developer Portal, built on Spotify’s Backstage.io. The portal is a central place for developers to discover, manage, and contribute. This company-wide initiative reduced the number of developer tools by 39% and drove major engineering changes; successful adoption of the Backstage portal led to $42 million in savings and a productivity boost by its second year, with continued growth.


I directed the effort to create a unified vision and improve the journey of Expedia Group’s developers, working across teams, departments, and partners. The outcome simplified platform adoption, decreased the time from ideation to real product, enhanced security, and improved the company’s financial performance.


To comply with my non-disclosure agreement, I have omitted and obfuscated confidential information. Information in this case study is my own and does not necessarily reflect the views of Expedia Group.



Outline

Summary
Prototypes
The Story
  • Discovery
  • Personas
  • User Journey
  • Friction
  • Strategy
  • User Research
  • Success Metrics
  • Competitive Analysis
  • Metric Ideas and Designs
  • Prototypes
  • Learned

Summary

Situation

Challenge

Expedia Group’s developers had many pain points across their development journey, such as onboarding, search, access to tools, learning the tech stack architecture, and finding useful documentation for writing code.


Goal

The goal was to envision the future state of Backstage’s UX areas to solve key pain points across the developer journey. The emphasis lay heavily on its weakest areas: adoption and technical documentation.


  1. Simplified platform adoption
  2. Bring ideas from imagination to the real world in as close to zero time as possible
  3. Reduced security vulnerabilities
  4. Financial improvement

My Roles

For the duration of the future state envisioning, I played the following roles:
1) UX design lead 2) UI designer 3) UX researcher 4) Product manager 5) Engineer

Team

Over several years, product and engineering teams had identified initial personas, friction, tools, and processes at a high level. They focused mainly on identifying problems. I was appointed as the lead who would provide solutions to those identified problems by envisioning the future state of Backstage.


A steering committee (SteerCo) started with me (Sr. UX Design Lead), several directors of product, and two UX designers. Product and engineering also came up with general ideas. I took mine and theirs and turned them into more detailed flow diagrams as well as interactive Figma prototypes.

Over a one-year period, Backstage’s future design envisioning happened in several phases, each focusing on needs that were deemed important at the time. By the last phase, the steering committee had evolved to include me, an engineering manager, product managers, and a product director.

Action


Design Challenges


My Contributions

I led the steering committee and came up with not only greatly improved UX ideas, but UI and some engineering ideas as well. The process spanned a year, with a good portion accomplished near its end with key committees. I also provided design team resource estimates for our efforts.


SEE IT

I kicked off the one-to-five-year vision by creating a product strategy document - a very high-level overview. The document informed the audience of the categories we’d be working on:


  1. Purpose
  2. Personas
  3. Pain Points
  4. Competitive analysis
  5. Proposal
  6. Finally, the value the future state would deliver

FEEL IT

On a regular basis, I identified the most important things I needed to learn and gathered information through direct interaction with potential customers. I conducted interviews and surveys, and also took measurements via an analytics tool called Glassbox.

Different engagements supported this effort. During the latter envisioning segment, I collaborated with my core team on a weekly and bi-weekly basis. Overall, my weekly process went like this:

Monday: Identify the most important thing to learn this week.
Monday - Wednesday: Interview customers. Use cases, market analysis, diagrams, prototypes, feedback.
Thursday: Run further interviews with potential customers. Receive feedback from PMs.
Friday: Present design progression to stakeholders and leadership for feedback. Synthesize learnings. Run a retrospective about the week and what to cover next week.

BELIEVE IT

The envisioned future state was validated through interviews and surveys throughout the planning process. These were conducted through various media - online forms, Slack, and email - by me, designers, and PMs. The designs will also continue to be tested and iterated on.


My Design Approach

The design process for Backstage typically flowed in the following order. Based on my interactions with clients and on research, I created designs one wave ahead of the development team; one wave was two months. I held meetings where design specs were discussed, and after their initial creation, follow-up discussions ensured the product matched the client’s needs and the engineering teams’ ability to deliver. After development, the designs went through the UAT process.

The ambiguity and complexity of the future state envisioning added a further challenge. I took myself through deeper-level design thinking and design system processes to address it.

View Alex’s detailed design process.

Result


Impact highlights

Developer onboarding time before committing code (creating an initial test piece) went from 3 days to 5 hours.


Consolidation of tools, major engineering changes, and successful adoption of Backstage led to $42 million in savings and a productivity boost by its second year, with continued growth.


The Backstage design system I created promoted a self-service model. Its successful creation, and my collaborative persuasion toward its adoption, helped lower designer resource usage by 30-35%.


Future state designs were developed and created through workshops.


Feature, code, and UX contributions to the open source community improved the OSCI (Open Source Community Index) ranking from 167 to 98.


Outputs

  • North star documentation.
  • Roadmap worthy ideas and designs.
  • Competitive analysis.
  • Journey maps, mind maps, affinity diagrams, heat maps, and flow diagrams.
  • User research metrics.
  • Processes, tools, pain points, and opportunities identified and optimized through user research.
  • Simplified navigation and search.
  • Metric ideas and designs, with the capability to view data at an aggregated level and drill down to individual levels.
  • Personas: identified, prioritized, and created.
  • Many interactive prototypes at all levels.
  • Role-based access scenarios: alerts, dashboards, plugin management shut-off switch, flags.
  • Plug-in contribution model.
  • Multiple diagram viewer: high-to-low-level views of flows, tools, pain points, and opportunities.

Examples of Prototypes I Created

Documentation (Figma) Relevancy Governance (Miro) Search (Figma)

Some of the needs were the responsibility of PMs and engineers, and some consisted of backend work. To aid this effort, I provided solutions to enhance both frontend and backend - covering UX, UI, product, and engineering.


Media I Used to Brainstorm and Present

  • Documents to write product strategy overview.
  • Miro digital whiteboard to create both a 50,000-foot strategy overview and granular flow diagrams.
  • Miro and Trello boards were used to plan, prioritize, and vote.
  • PowerPoint slides were used to start meetings, going over the sprint agenda. In certain meetings I transitioned into either Miro or low-, mid-, and high-fidelity prototypes created in Figma - going over friction and how the new designs would improve the user experience.
  • Meetings ended with learnings and next week’s agenda.

Prototypes


Here are some of my many future state ideas that materialized into diagrams and prototypes.

Documentation Landing

I simplified and improved the existing landing page, making it more accessible.


Onboarding and Learning

This addresses the pain points of new members onboarding into the dev community. Time-based cards provide learning materials that are contextually relevant to specific users.


Document Homepage

This is a new homepage for documentation. It’s designed to be highly customizable per persona.


Document Videos


Document Editor

The document editing process was broken. Users were going through GitHub or editors such as MS Visual Studio, which meant code had to be manually corrected. Another option was to use SSG (Static Site Generator) command-line tools.

Developers used a handful of different SSGs. These posed problems of their own: people didn’t know the tools existed, non-developers faced learning challenges, and developers gravitated toward SSGs written in the language they were already used to.


I proposed an editor with the following capabilities (a small sketch of the template idea follows the list).

  • Simple and intuitive to use.
  • Templates to speed up document creation and management.
  • AI driven template suggestions based on the user’s role and usage.
  • Related page recommendations.
  • AI driven proofing and governance process for quality control.
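
A tiny TypeScript sketch of the template idea referenced above: pick a documentation template by role and fill its placeholders. The roles, template names, and placeholder syntax are hypothetical illustrations, not the shipped editor.

```typescript
// Hypothetical sketch: pick a documentation template by role and fill placeholders.
type Role = 'producer' | 'consumer' | 'admin';

interface DocTemplate {
  name: string;
  body: string; // placeholders written as {{key}}
}

// Illustrative templates only; real template names and content would differ.
const templates: Record<Role, DocTemplate> = {
  producer: { name: 'Service README', body: '# {{serviceName}}\n\nOwner: {{owner}}\n\n## Getting started\n' },
  consumer: { name: 'Usage Guide', body: '# Using {{serviceName}}\n\nContact: {{owner}}\n' },
  admin: { name: 'Runbook', body: '# Runbook: {{serviceName}}\n\nEscalation: {{owner}}\n' },
};

function suggestTemplate(role: Role): DocTemplate {
  return templates[role];
}

function renderTemplate(template: DocTemplate, values: Record<string, string>): string {
  return template.body.replace(/\{\{(\w+)\}\}/g, (_, key) => values[key] ?? '');
}

// Example: a producer starting a new service page.
const draft = renderTemplate(suggestTemplate('producer'), {
  serviceName: 'checkout-service',
  owner: 'team-payments',
});
console.log(draft);
```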

Documentation Insight

I ideated key metrics that would inform and empower users to glean insight into their apps and content. I also created this unique table design that allows the design to scale: users can view data at a high level or drill down to view details. It was contributed to Spotify’s open source components and was an improvement on Spotify’s existing collapsible table design.
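
To illustrate the aggregate-plus-drilldown interaction, here is a minimal React/TypeScript sketch of an expandable row built with Material UI, the component library Backstage uses. The data shape and column names are illustrative assumptions, not the component that was actually contributed.

```tsx
// Minimal sketch of an aggregate row that expands into per-item detail rows.
import React, { useState } from 'react';
import {
  Table, TableBody, TableCell, TableHead, TableRow, Collapse, IconButton,
} from '@material-ui/core';
import KeyboardArrowDownIcon from '@material-ui/icons/KeyboardArrowDown';
import KeyboardArrowUpIcon from '@material-ui/icons/KeyboardArrowUp';

interface DocSetRow {
  name: string;
  totalViews: number;                            // aggregated metric shown at the top level
  documents: { title: string; views: number }[]; // drilldown detail
}

const ExpandableRow = ({ row }: { row: DocSetRow }) => {
  const [open, setOpen] = useState(false);
  return (
    <>
      <TableRow>
        <TableCell>
          <IconButton size="small" onClick={() => setOpen(!open)}>
            {open ? <KeyboardArrowUpIcon /> : <KeyboardArrowDownIcon />}
          </IconButton>
        </TableCell>
        <TableCell>{row.name}</TableCell>
        <TableCell align="right">{row.totalViews}</TableCell>
      </TableRow>
      <TableRow>
        <TableCell colSpan={3} style={{ paddingTop: 0, paddingBottom: 0 }}>
          <Collapse in={open} timeout="auto" unmountOnExit>
            <Table size="small">
              <TableBody>
                {row.documents.map(doc => (
                  <TableRow key={doc.title}>
                    <TableCell>{doc.title}</TableCell>
                    <TableCell align="right">{doc.views}</TableCell>
                  </TableRow>
                ))}
              </TableBody>
            </Table>
          </Collapse>
        </TableCell>
      </TableRow>
    </>
  );
};

export const DocInsightTable = ({ rows }: { rows: DocSetRow[] }) => (
  <Table>
    <TableHead>
      <TableRow>
        <TableCell />
        <TableCell>Document set</TableCell>
        <TableCell align="right">Views</TableCell>
      </TableRow>
    </TableHead>
    <TableBody>
      {rows.map(row => <ExpandableRow key={row.name} row={row} />)}
    </TableBody>
  </Table>
);
```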


Adoption and Governance

I created this process in Miro to improve the user adoption rate of Backstage, with fail-safe governance steps toward its success.


Unified Search

Users were going to other pages or portals to get information for various reasons, wasting a lot of time in the process. This design would reduce context switching and allow users to find what they want from one location, more quickly and with better outcomes.
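
As an illustration of the unified search concept, here is a hedged TypeScript sketch that fans a query out to several existing sources and merges the results into one ranked list. The SearchSource interface and result shape are hypothetical, not the portal's actual search API.

```typescript
// Hypothetical federated search: query several sources and merge into one ranked list.
interface SearchResult {
  title: string;
  url: string;
  source: string;
  score: number; // normalized 0..1 relevance from the source
}

type SearchSource = (query: string) => Promise<SearchResult[]>;

async function unifiedSearch(
  query: string,
  sources: SearchSource[],
  limit = 20,
): Promise<SearchResult[]> {
  // Query every source in parallel; a failed source shouldn't break the whole search.
  const settled = await Promise.allSettled(sources.map(search => search(query)));
  const merged = settled
    .filter((r): r is PromiseFulfilledResult<SearchResult[]> => r.status === 'fulfilled')
    .flatMap(r => r.value);

  // One ranked list, so users don't have to hop between tools.
  return merged.sort((a, b) => b.score - a.score).slice(0, limit);
}
```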


Relevancy Score

We wanted to raise the quality of the documents and apps users were creating. I came up with this flow, with key business metrics that would ensure that.
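
To make the scoring idea concrete, here is a small TypeScript sketch of a relevancy score that blends freshness, usage, and feedback signals. The specific signals, thresholds, and weights are illustrative assumptions, not the production formula.

```typescript
// Illustrative document relevancy/quality score; inputs and weights are assumptions.
interface DocSignals {
  daysSinceUpdate: number;        // freshness
  viewsLast30Days: number;        // usage
  positiveFeedbackRatio: number;  // 0..1 from thumbs up/down
  brokenLinkCount: number;        // quality problems
}

function relevancyScore(s: DocSignals): number {
  const freshness = Math.max(0, 1 - s.daysSinceUpdate / 365);  // decays over a year
  const usage = Math.min(1, s.viewsLast30Days / 500);          // saturates at 500 views
  const quality = Math.max(0, s.positiveFeedbackRatio - 0.1 * s.brokenLinkCount);

  // Weighted blend, scaled to 0-100 so it can be shown as a grade in the UI.
  return Math.round(100 * (0.3 * freshness + 0.3 * usage + 0.4 * quality));
}
```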


The Story


Discovery

Over several years, product and engineering teams had mapped the developer journey and its friction. Expedia’s 20,000 services, APIs, and tools weighed heavily in cost, duplication, and complication. The company wanted to cut costs and achieve quicker turnarounds; developers wanted simpler experiences. This led to the adoption of Backstage.io - an open source platform that originated at Spotify - which offered these advantages:


  • Deliver a frictionless developer experience by offering a single pane of glass for EG engineers to contribute, discover, and manage their developer ecosystem
  • Unlock exponential growth by accelerating adoption of paved-road tools, leveraging Backstage as the developer front door
  • Drive engagement with the open source community by contributing back our EG internal best practices and developer solutions

Backstage made Expedia’s applications more manageable. Among them were 230 developer tools (e.g. AMS metadata, GitHub, Spinnaker, Jenkins). My job also included improving the developer journey by optimizing these tools’ capabilities through Backstage.

Developer Journey phases were as follows:

  • Onboard/Set
  • Build
  • Validate/Test
  • Deploy/Release
  • Run
  • Monitor


Personas

There were many personas among the developer community. For this work, the focus was mainly on full stack developers, who made up about 55% of the developer pool. Out of these I created several groups.

Producer (Figma) Consumer (Figma) Admin (Figma)

Producer and Consumer

Producers (owners) could view, create, edit, archive, delete, store, and organize.

Consumers could view the data, comment, and contribute toward quality scores. The majority of devs were consumers.

Admin

Admins had all user capabilities plus the ability to manage, halt, or delete applications. They could also oversee the quality and governance process.
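
A minimal TypeScript sketch of this capability model, expressed as a capability map with a simple permission check. The capability names are shorthand for the behaviors listed above, not an actual access-control implementation.

```typescript
// Sketch of the producer / consumer / admin capability model described above.
type Persona = 'producer' | 'consumer' | 'admin';
type Capability =
  | 'view' | 'comment' | 'rateQuality'
  | 'create' | 'edit' | 'archive' | 'delete' | 'store' | 'organize'
  | 'manage' | 'halt' | 'govern';

const consumerCaps: Capability[] = ['view', 'comment', 'rateQuality'];
const producerCaps: Capability[] = ['view', 'create', 'edit', 'archive', 'delete', 'store', 'organize'];

const capabilities: Record<Persona, Capability[]> = {
  consumer: consumerCaps,
  producer: producerCaps,
  // Admins have all user capabilities plus management, halt/delete, and governance oversight.
  admin: Array.from(new Set<Capability>([...consumerCaps, ...producerCaps, 'manage', 'halt', 'govern'])),
};

function can(persona: Persona, capability: Capability): boolean {
  return capabilities[persona].includes(capability);
}

// Examples: can('consumer', 'delete') === false; can('admin', 'halt') === true
```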

Friction

During user research done by team members and myself, the following pain points were identified. I took these and looked for opportunities to enhance the user experience on all fronts.

  • How do we automate access and provide clear instructions?
  • How do we show preferred tooling?
  • How do we create an intuitive search with clear navigation?
  • How do we teach developers about the tech stack and preferred tooling?
  • How do we teach team-specific tooling & tech stack?
  • How do we create both 1st party + 3rd party education resources?
  • How do we create golden path documentation?
  • How do we create a personalized experience?
  • How do we improve the Bootcamp * ?


* Bootcamp is a one-day onboarding E2E development exercise in which a newly joined developer goes through setting up environments, gaining access, and creating and deprecating a basic “hello world” app.


Journey Map

Developer Journey phases were viewed as follows: Onboard, Build, Validate & Test, Deploy & Release, Run, Monitor & Alert. The journey map immediately below (Example - Experience Mgmt. Platform) and the heatmap above were researched and created by other team members over a period of several years.


I took some of the identified friction and the existing heatmap, then applied numbers to create the new heat map below. User research showed the build phase was taking 70% of the dev journey. As part of the North Star vision, I recommended to our division head that we invest in AI code generation based on machine learning (ML), which could boost productivity.

This was in the summer of 2022, when AI concepts were still relatively new; the AI frenzy didn’t occur until late 2023. My knowledge of the tech trend and my intuition told me this was coming and that it would be good to start early. I was right.


Strategy

Challenges

There were a number of challenges.

  1. There were too many areas to cover. The expectation was that this would be an incremental journey, but picking where to start was challenging for me.
  2. There was a disconnect between upper management and middle management. There were ambiguities and changes in direction and scope. They also tried to go too fast.
  3. The engineering team struggled to see how we could execute on a vision that was “too big”.
  4. Some of the steering committee members lacked industry experience and couldn’t grasp the technical solutions provided.
  5. The technical depth of Backstage was underestimated by some steering committee members and by new product managers who joined after the reorg.
  6. While the concept vision resonated within the organization, a lack of resources meant we lost momentum going into execution in some areas and had to leave them undone.
  7. As a result, in some instances communication and productivity didn’t reach the level they could have.

User research was conducted; however, I proposed more. In some instances the new middle management didn’t quite grasp user experience design and the importance of user research. I was able to discuss the matter with an upper manager on the steering committee and got the support I needed.

Scope

As I said previously, the future state envisioning I was involved in went in several spurts over a period of one year. When we started the last segment after the reorg, I proposed that we use the existing roadmap to envision one year ahead, then, based on tests and iterations, proceed to plan toward three to five years. Along with that, we would form a clear picture of how our product and platform should look five years later, so we could work backwards from that ideal state.

To have a great impact, we need to think bigger and farther than how typical companies strategize. Yes, that involves taking greater risks. Technology changes fast and will continue to accelerate. That is why I keep up to date on emerging technologies and design changes in the market.

My recommendation was to focus on the greatest friction points and underserved segments, then identify the least-effort, maximum-impact areas. So we broke down the key value-adding elements of the design.


User Research

Research consisted of interviews, personas, diaries, journey maps, surveys, AB tests, card sorting, tree testing, swarm sessions, analytics tools, etc. I’ve learned that it’s better to conduct user research with a smaller core group of people than to take a shotgun approach with a large group. A shotgun approach is helpful at times, especially when starting out, but for the longer term you want to stay with the smaller core.

The core user base was selected the following way: I sent out initial surveys to run AB tests, starting with the larger Backstage community. Out of these hundreds of people, about eighty responded. Among those, some responded not only to the multiple-choice questions but also by filling out comments.

I took those eager participants and continued to funnel down to find our core user test group. I was like a miner panning for gold - after some effort and sifting, gold nuggets were found.


Success Metrics

For qualitative measures I conducted interviews observing product usage and experiences, until I wasn’t learning anything particularly new. These methods were performed mostly with an audience of six to eight per segment.

Surveys were conducted before and after iterations to test my hypotheses and designs - on average about every three to four weeks. At times Slack or email was used for simpler feedback. I chose NPS (Net Promoter Score) as one of the key metrics.
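
For reference, NPS is calculated as the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal TypeScript sketch of that calculation:

```typescript
// NPS as used for the before/after survey comparisons, on a -100..100 scale.
function netPromoterScore(scores: number[]): number {
  if (scores.length === 0) return 0;
  const promoters = scores.filter(s => s >= 9).length;
  const detractors = scores.filter(s => s <= 6).length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// Example: a small survey batch with 4 promoters and 2 detractors out of 8 -> NPS of 25.
console.log(netPromoterScore([10, 9, 8, 7, 6, 10, 3, 9]));
```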

Initial stages, with their ambiguities, warranted testing against greater numbers. When I was clearer and more certain, it made sense to send tests to a smaller pool.


Competitive Analysis

Through competitive analysis, I found the approaches industry pace-setters and our competitors were taking to learn, improve, envision, and adopt. By leveraging existing and proven methods, I was able to cut through hypotheses and reach our goals more quickly.

I am a proponent of creative thinking and design. Radical solutions are needed at times. However, in most cases it’s best to utilize what design solutions users are already accustomed to and market leaders have established.

People like and easily adopt what they are used to. That’s the type of control wheel I like to put into our users’ hands. These are some of the products that were referenced and compared for inspiration.


Metric Ideas and Designs

These were some of the metrics I ideated and designed. I created prototypes that display data at an aggregated level, with the capability to drill down to individual levels.

Document Set   Aggregated Document   Search  

Ease of Discovery

What is the E2E flow of a user finding what’s needed? How do users go through search and navigation menus? I wanted to understand an individual contributor’s ability to go from stuck to unstuck in just a few seconds or minutes. We found that it’s extremely important for a growing engineering org to maintain a shared knowledge base that’s easy to navigate and explore with the help of search and navigation flows. Metrics like search success rates, click-through rates, and search result relevance were targeted.
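
Below is a small TypeScript sketch of how such search metrics could be derived from search logs. The event shape and the top-3 relevance proxy are illustrative assumptions, not the production instrumentation.

```typescript
// Hypothetical search log event and the metrics derived from it.
interface SearchEvent {
  query: string;
  resultsShown: number;
  clickedRank?: number; // 1-based rank of the clicked result, undefined if no click
}

function searchMetrics(events: SearchEvent[]) {
  const total = events.length;
  const withClick = events.filter(e => e.clickedRank !== undefined).length;
  const zeroResults = events.filter(e => e.resultsShown === 0).length;
  return {
    clickThroughRate: total ? withClick / total : 0,  // searches ending in a click
    zeroResultRate: total ? zeroResults / total : 0,  // searches that found nothing
    // Crude relevance proxy: how often the click landed in the top 3 results.
    top3ClickShare: withClick
      ? events.filter(e => (e.clickedRank ?? Infinity) <= 3).length / withClick
      : 0,
  };
}
```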

Reduced Context Switching

Reducing context switching can help engineers stay in the "zone". We measured the number of different tools an engineer has to interact with in order to get a certain job done. I envisioned features that allowed users to get the information they needed right where they were on the platform and keep their train of thought connected toward an optimal outcome.

Search Result Relevance

This is the ranking of the relevant content developers are searching for out of all search results. I was asked to come up with what’s most relevant given only a very general direction; when I inquired further, not much more was given. I drew on my experience and knowledge in this area and created metrics the team and users loved, monitored through click-throughs on search results.

Traditional Metrics

These metrics included visits (monthly active users, daily active users, unique users, total users, etc.) and page views. I also introduced additional metrics where appropriate (a sketch of how the visit metrics could be computed follows the list):

  • Created issues, pull requests raised in GitHub
  • Quality or relevancy scores
  • Health grade
  • Accessibility scores
  • Date created, updated
  • Summaries, view by dates, sort, and drill down capabilities
  • and many more
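
Here is a minimal TypeScript sketch of how the visit metrics (DAU, MAU, and their ratio as a stickiness signal) could be computed from a usage event log. The event shape is a hypothetical assumption for illustration.

```typescript
// Hypothetical usage event; DAU/MAU computed as unique users over rolling windows.
interface UsageEvent {
  userId: string;
  timestamp: Date;
}

function activeUsers(events: UsageEvent[], since: Date): number {
  return new Set(events.filter(e => e.timestamp >= since).map(e => e.userId)).size;
}

function visitMetrics(events: UsageEvent[], now = new Date()) {
  const dayAgo = new Date(now.getTime() - 24 * 60 * 60 * 1000);
  const monthAgo = new Date(now.getTime() - 30 * 24 * 60 * 60 * 1000);
  const dau = activeUsers(events, dayAgo);
  const mau = activeUsers(events, monthAgo);
  return { dau, mau, stickiness: mau ? dau / mau : 0 }; // DAU/MAU ratio
}
```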

Performance Metrics

  • Latency: improve load times, response times, etc., measured in milliseconds
  • Error rate: 0.099%
  • Number of plugins built
  • Subscribed (Added to favorites)
  • Repeat users (how many users are active once a day/once a week/once a month etc)
  • Popular templates and pages

DORA (DevOps Research and Assessment) metrics are used by DevOps teams to measure their performance. Computation sketches follow the two lists below.

Mean Time to

  • MTTD: Detect
  • MTTK: Know
  • MTTF: Fix
  • MTTR: Resolve, average summary of the three above
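
A minimal TypeScript sketch of how the mean-time metrics could be computed from incident records. The incident fields, and treating MTTR as the end-to-end sum of the three phases, are illustrative assumptions.

```typescript
// Hypothetical incident record with timestamps for each phase.
interface Incident {
  startedAt: Date;
  detectedAt: Date;  // MTTD measures startedAt -> detectedAt
  diagnosedAt: Date; // MTTK measures detectedAt -> diagnosedAt
  fixedAt: Date;     // MTTF measures diagnosedAt -> fixedAt
}

const hoursBetween = (a: Date, b: Date) => (b.getTime() - a.getTime()) / 3_600_000;
const mean = (xs: number[]) => (xs.length ? xs.reduce((s, x) => s + x, 0) / xs.length : 0);

function meanTimeMetrics(incidents: Incident[]) {
  const mttd = mean(incidents.map(i => hoursBetween(i.startedAt, i.detectedAt)));
  const mttk = mean(incidents.map(i => hoursBetween(i.detectedAt, i.diagnosedAt)));
  const mttf = mean(incidents.map(i => hoursBetween(i.diagnosedAt, i.fixedAt)));
  // MTTR here summarizes the three phases above as end-to-end resolve time.
  return { mttd, mttk, mttf, mttr: mttd + mttk + mttf };
}
```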

Reliability Metrics

  • Availability: Average Uptime - 99.99 %
  • Latency rate
  • Failure rate (Frequency of deployment failures)
  • Frequency (Average deployments released)
  • Lead time (Time between commit and deployment)
  • Average down time
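
And a companion sketch for the deployment-based metrics in the list above (deployment frequency, failure rate, and lead time). The deployment record shape is again an assumption for illustration.

```typescript
// Hypothetical deployment log entry and the reliability metrics derived from it.
interface Deployment {
  committedAt: Date; // commit time of the released change
  deployedAt: Date;
  failed: boolean;
}

function reliabilityMetrics(deployments: Deployment[], periodDays: number) {
  const n = deployments.length;
  const failures = deployments.filter(d => d.failed).length;
  const leadTimesHours = deployments.map(
    d => (d.deployedAt.getTime() - d.committedAt.getTime()) / 3_600_000,
  );
  return {
    deploymentFrequency: n / periodDays,                                        // deployments per day
    failureRate: n ? failures / n : 0,                                          // share of failed deployments
    meanLeadTimeHours: n ? leadTimesHours.reduce((s, x) => s + x, 0) / n : 0,   // commit-to-deploy time
  };
}
```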

Security Metrics

  • Security risk plugins
  • Operational risk plugins

Technical Documentation Reach

Total Services in Backstage (~1261); Total services/apps tied to TechDocs - 639.
2023 Goal - Increase TechDocs onboarding for services by 15% (1350)

Backstage Open Source Contributions

Feature, code, and UX contributions to the open source community initially improved the OSCI (Open Source Community Index) ranking from 167 to 107. Additional contributions improved it into the 90s.

Prototypes

Prototypes are demoed at the top of this page.

Learned

We received positive reviews when presenting our concepts to leadership. I faced challenges during this project that I hadn’t faced before - seamlessly integrating distant-future scenarios into an existing design, plus being asked to tread into product and engineering space, which I wasn’t entirely familiar with.

Through this project, I learned the importance of really diving into the existing design and flow of the application in order to introduce new features that look and feel like the existing design. Researching the background and how the app evolved was also key to identifying the best solutions for the problems at hand.