Rendering of a mid-fidelity PDF Editor UI

PDF Management App Redesign with AI Components

For six months, I led a small team of designers and developers to redesign a file management app for a federal agency.

Role:

UX Designer

Team:

3 designers, 2 developers, 1 SME

Tools:

Axure RP

Skills:

Research, UX Design, Usability Testing

Overview

As a UX consultant for Publicis Sapient, I redesigned a PDF viewer and editor integral to client case processing. This project included significant UX redesign, a complete UI update — including the creation of a dark mode design system — new AI components, and two rounds of usability testing to validate the design.

Problem Space

The PDF viewer used unique UI and UX patterns unlike any other client product.

The old PDF viewer was difficult to learn and clunky even for the most experienced users. Usability issues were compounded by a lack of familiarity: the PDF viewer stuck out like a sore thumb. It used an entirely unique "design system" and unusual UX patterns that engineers had created ad hoc over several years. Even the flow for leaving and editing comments was bespoke, despite the unified commenting flow in the client's official design system.

Challenge 01.

The interface ignored design system standards. Out-of-date design patterns had to go.

In addition, the old tool lacked options for users to quickly navigate to pages of interest. To find the pages they were looking for, users had to scroll up and down through the file or memorize page numbers to skip ahead. Often, users had to cross-reference two pages in the file, scrolling repeatedly between them. We knew we could do better.

And we had an opportunity to make changes.

Opportunity

New case types introduced in 2023 increased the typical case file from roughly 50 pages to several hundred. In the old PDF viewer, users wasted time scrolling back and forth between pages of interest. We needed a navigation overhaul to address the evolving requirements and get ahead of the old app's usability pitfalls.

At the same time, the client had recently introduced a new AI tagging feature to label documents. This new tool had the tech, but needed an interface to go with it.

So we made a research plan.

Research

I created a user research plan that focused on how users navigate the app and their pain points. We combined observation with open-ended questions to get deeper insights. I wrote the script, ran point on user recruitment, and moderated the sessions. Then, I synthesized our findings and presented to stakeholders, sharing insights that earned buy-in for the redesign.

Role:

Moderator

Timeline:

Six interviews in two weeks

Methods:

User Interviews

Deliverables:

Stakeholder presentation, design recommendations

Research Findings

Users span multiple career levels with different priorities and preferred ways of working.

Some users scrolled through files, from Page 1 to the end, for a cursory skim before going back to review in detail. Some preferred to flag pages of interest using thumbnails without the full read-through. Others spent time messaging back-and-forth with their direct reports to discuss the documents.

Across the board, users expressed dissatisfaction with the current design patterns for navigation and commenting.

Challenge 02.

The redesign needs to maintain flexible workflows and improve navigation for large files.

Each user expressed distinct preferences and workflows, which presented interesting design challenges and required our team to prioritize flexibility. While some were interested in features like filters, others tended to resent extra visual noise on their screens.

AI tagging was underutilized and did not significantly improve productivity.

Users reviewed, annotated, flagged, and approved anywhere from 10 to 40 pages of documents at once. Those documents ranged from driver’s licenses to federal forms to doctor’s notes. Users referred to specific pages of evidence multiple times and scrolled back and forth between pages often. Users could always tag pages with document labels, but had not yet worked with AI tagging.

We discovered that users were not using tags; they were using comments instead.

Challenge 03.

The user-generated tagging system was unusable. To promote the AI tagging enhancements, the entire flow had to be fixed.

Users had to choose tags from a single-select dropdown with hundreds of potential tags and no typeahead. Understandably, they preferred to leave the comment “driver’s license” rather than scroll through the dropdown to tag. We knew that we had to fix the tagging pattern to get value out of the new AI tags.

Results

We took our research findings and ran with them.

01.

We refined the copy across the platform to make it more intuitive and accurate.

02.

We overhauled the tagging system to prominently highlight AI-generated tags, improving transparency and user trust.

03.

At the same time, we streamlined commenting and tagging to match design standards and increase efficiency.

04.

We introduced new filters, allowing users to narrow down results and find relevant content faster.

05.

Finally, we improved navigation to streamline the user journey, reducing friction and enhancing overall usability.

Introducing AI

This was one of the first and most visible applications of AI in the client's ecosystem, making it critical to set clear boundaries between AI-generated content and user input, and to establish a design system for AI tools. Initial user feedback on AI was mixed-to-positive, with some users hesitant about the integration. We wanted to set a precedent of transparency, user control, and trust.

AI tagging components with new color and styling standards.

We introduced the magic wand icon, an optional purple colorway, and an expanding badge component. These visual cues help users identify AI interactions at a glance and set a scalable standard for upcoming AI tools.
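
To make the pattern concrete, here is a minimal sketch of an expanding AI badge in React. The component name, hex values, and the sparkle stand-in for the magic wand icon are illustrative assumptions, not the client's actual design system values.

```tsx
import { useState } from "react";

// Minimal sketch of an expanding AI badge. The name, colors, and the
// sparkle stand-in for the magic wand icon are hypothetical.
export function AiTagBadge({ label }: { label: string }) {
  const [expanded, setExpanded] = useState(false);
  return (
    <button
      onClick={() => setExpanded(!expanded)}
      aria-label={`AI-generated tag: ${label}`}
      style={{
        background: "#6B46C1", // stand-in for the optional purple colorway
        color: "#FFFFFF",
        border: "none",
        borderRadius: "12px",
        padding: "2px 8px",
      }}
    >
      {/* Stand-in for the magic wand icon */}
      <span aria-hidden>✨</span>
      {expanded && <span> {label} · AI-generated</span>}
    </button>
  );
}
```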

AI tagging could be a big help to these users, but they have to trust it. I’m hopeful that thoughtful design can convince even the most stalwart skeptic to give it a try.

Streamlining Tags and Comments

Our research showed that users often used comments as makeshift tags because scrolling through a long tag list was just too tedious. This workaround was fast but led to files cluttered with pseudo-tag comments, making genuine annotations hard to spot. To fix this, we switched to a typeahead dropdown, making it quicker and easier to find the right tag.
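
As a sketch of the interaction, the typeahead boils down to ranking prefix matches ahead of substring matches over the tag list; the tag names below are hypothetical stand-ins for the client's taxonomy.

```ts
// Minimal typeahead filter over a large tag list. Tag names and the
// ranking rule are illustrative, not the client's actual taxonomy.
const ALL_TAGS: string[] = [
  "Driver's License",
  "Doctor's Note",
  "Federal Form",
  // ...hundreds more in the real list
];

function matchTags(query: string, tags: string[], limit = 8): string[] {
  const q = query.trim().toLowerCase();
  if (!q) return [];
  // Rank prefix matches above substring matches so the most likely
  // completion surfaces first as the user types.
  const prefix = tags.filter((t) => t.toLowerCase().startsWith(q));
  const rest = tags.filter(
    (t) => !t.toLowerCase().startsWith(q) && t.toLowerCase().includes(q)
  );
  return [...prefix, ...rest].slice(0, limit);
}

// matchTags("dri", ALL_TAGS) -> ["Driver's License"]
```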

We separated user-generated tags from AI-generated ones, with any edits to AI tags automatically moving them to the user-generated section. To keep improving AI accuracy, we worked with the machine learning team to capture data on any tagging inaccuracies.
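
Here is a minimal sketch of that provenance model, assuming a simple client-side shape; the logTagCorrection hook is a hypothetical placeholder for the machine learning team's feedback loop.

```ts
// Sketch of the tag provenance model. The shapes and the
// logTagCorrection hook are hypothetical placeholders.
type TagSource = "ai" | "user";

interface PageTag {
  label: string;
  source: TagSource;
}

function logTagCorrection(event: { predicted: string; corrected: string }): void {
  // In the real app, this would feed the ML team's pipeline.
  console.log("tag-correction", event);
}

function editTag(tag: PageTag, newLabel: string): PageTag {
  // Editing an AI tag reclassifies it as user-generated and records
  // the correction so the model can improve.
  if (tag.source === "ai" && newLabel !== tag.label) {
    logTagCorrection({ predicted: tag.label, corrected: newLabel });
  }
  return { label: newLabel, source: "user" };
}
```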

Navigation

We made the filters and annotation panels collapsible, giving users more control over screen space to suit their workflow. Previously, thumbnail previews of each page were tucked away in a panel even though they were widely used for navigating large documents. Using a view switcher, we elevated thumbnail navigation to the same level as full-page scrolling in the information hierarchy.

While thumbnails are great for at-a-glance navigation, we worried that they would start to blend together in documents with hundreds of pages. We added subtle metadata, like comment count and tags, to make it easy to spot pages with annotations.
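
A rough sketch of the navigation model we landed on, with hypothetical names for the view switcher, collapsible panels, and thumbnail metadata:

```ts
// Sketch of the viewer's navigation state. All names are illustrative.
type ViewMode = "full-page" | "thumbnails";

interface ViewerState {
  mode: ViewMode;
  filtersPanelOpen: boolean;
  annotationsPanelOpen: boolean;
}

interface ThumbnailMeta {
  pageNumber: number;
  commentCount: number; // surfaced as a subtle badge on the thumbnail
  tags: string[];
  bookmarked: boolean;
}

function toggleViewMode(state: ViewerState): ViewerState {
  return {
    ...state,
    mode: state.mode === "full-page" ? "thumbnails" : "full-page",
  };
}
```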

Filtering

Finally, we wanted to address the biggest problem with the old design: too much scrolling. We added filters for both full pages and thumbnails, making it way easier for users to jump to specific sections in the file. Filters also help cut down on visual clutter by letting users hide unrelated content. We fine-tuned the list of filters based on user feedback and tested the updated flow. Users were excited to be able to filter by tags, comments, and bookmarks to engage with only the most important pages.
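
The filter logic itself is simple; here is an illustrative sketch with hypothetical field and filter names.

```ts
// Illustrative filter logic for jumping to pages of interest.
interface Page {
  pageNumber: number;
  tags: string[];
  commentCount: number;
  bookmarked: boolean;
}

interface PageFilters {
  anyOfTags?: string[]; // keep pages carrying at least one of these tags
  hasComments?: boolean;
  bookmarkedOnly?: boolean;
}

function applyFilters(pages: Page[], f: PageFilters): Page[] {
  return pages.filter(
    (p) =>
      (!f.anyOfTags?.length || f.anyOfTags.some((t) => p.tags.includes(t))) &&
      (!f.hasComments || p.commentCount > 0) &&
      (!f.bookmarkedOnly || p.bookmarked)
  );
}
```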

Usability Testing

I mastered Axure prototyping techniques to create a fully interactive prototype for user testing. We had users complete the same tasks in both the new prototype and the old app.

Role:

Moderator

Timeline:

Fifteen tests in three weeks

Methods:

A/B Testing

Deliverables:

User feedback summary, task success rates

When we put it in front of users, the new PDF viewer performed significantly better in terms of navigation, and the feedback was overwhelmingly positive. Users especially loved the new commenting system. Some users were looking to fact-check the AI, while others were excited about its potential. Most importantly, all users clearly understood when they were interacting with AI and how to integrate it into their workflows effectively.

Now we are confident that the redesign improves usability and efficiency.

Handoff

Before happily ever after, there was handoff documentation.

Making the prototype a reality involved a couple of new components, updates to the design system, and a navigation overhaul. Thorough documentation and close collaboration with developers made it possible.

We added new AI components to the design system and React Storybook. I wrote design specs and tested the new components for Section 508 compliance.

Users were excited to see the client's design system applied to the PDF management app. But there was one thing they wanted to keep from the old design: dark mode. Looking at hundreds of PDFs is hard on the eyes; dark mode eases the strain and helps users distinguish pages from the annotation tools around them. We created a dark mode version of the design system using design tokens.
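
A minimal sketch of how token-based theming works, assuming CSS custom properties as the delivery mechanism; the token names and hex values are hypothetical, not the client's actual palette.

```ts
// Components reference semantic tokens; dark mode swaps the values.
const lightTokens = {
  "surface-page": "#FFFFFF",
  "surface-chrome": "#F2F2F2",
  "text-primary": "#1A1A1A",
  "accent-ai": "#6B46C1",
};

const darkTokens: typeof lightTokens = {
  "surface-page": "#2E2E2E",    // pages stay lighter than the chrome...
  "surface-chrome": "#1A1A1A",  // ...so they remain distinct from annotation tools
  "text-primary": "#F2F2F2",
  "accent-ai": "#B794F4",
};

function applyTheme(tokens: typeof lightTokens): void {
  for (const [name, value] of Object.entries(tokens)) {
    document.documentElement.style.setProperty(`--${name}`, value);
  }
}
```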

Finally, we baked web analytics into every new feature. Data-driven insights are crucial for understanding how a new design performs and for informing future decisions. Future updates to the PDF management app will be able to leverage analytics in a way that we could not.
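
For illustration, here is a sketch of what that instrumentation might look like; the event names and the /analytics endpoint are hypothetical.

```ts
// Sketch of the analytics events attached to new features.
type AnalyticsEvent =
  | { name: "filter_applied"; filter: string }
  | { name: "view_mode_changed"; mode: "full-page" | "thumbnails" }
  | { name: "ai_tag_corrected"; from: string; to: string };

function track(event: AnalyticsEvent): void {
  // sendBeacon survives page unloads, making it a good fit for UI events.
  navigator.sendBeacon("/analytics", JSON.stringify(event));
}

// Example: record that a user filtered to bookmarked pages.
track({ name: "filter_applied", filter: "bookmarks" });
```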

Key Takeaways

Leading this project was a major responsibility and a fantastic learning experience. Speaking with real users about their day-to-day frustrations and motivations was enlightening. I made impactful design decisions ranging from UI updates to new feature designs to tagging taxonomy. I presented work to the client in multiple contexts, including brainstorming sessions, usability study recaps, and legal reviews. I learned a lot about advocating for design in consulting contexts and relating human-centered design principles to business priorities.

And at the end of the day, I am confident that the new product will address real usability issues and scale to accommodate changing business interests.