Re-thinking Buildxact's onboarding: Key experiments that led to a 40% increase in self-serve signups

Overview

Since June 2023, the acquisition team at Buildxact has run over 13 onboarding experiments. The net result was not only a 40%+ increase in users reaching core activation features, but also a 30%+ improvement in self-serve conversion, with an estimated net impact of $1.4M in additional ARR.

My Role

Lead Designer

Duration

June 2023 - Present

Team

1 Product Manager
4 Engineers
1 Copywriter

The Challenge

Of the roughly 1000 new monthly trials, only 44% reached the Aha moment.

Over the years, Buildxact had rapidly built features to accommodate customer needs. While this served the product well in its formative years, it ultimately resulted in a product with high complexity, diminishing returns on new features, and a stickiness metric of seven months.

Most importantly, the company's ARR growth had slowed.

Starting a marathon of Product Experimentation

In June 2023 we formed an acquisition team to look into the growth side of the product. As the design lead for acquisition, I was tasked with running product experiments to improve onboarding, focussing on activation and conversion.

Over the next 12 months we ran more than a dozen experiments. While not all yielded immediate success, each provided valuable insights. This case study highlights some of our key experiments and their outcomes.

Experiment 1

The quick start guide: Validating less is more

The existing dashboard onboarding task list, while functional, performed quite poorly. Its original intent was to drive users to upload a plan, which it did, but it failed to cater to underlying user behaviours.

Through discovery sessions with customer success and support, we identified areas for improvement. We underpinned the uplift on a simple premise: less is more.

Data showed low engagement

An analysis of the existing quick start feature for trial users revealed low engagement: the most clicked action, "Upload a Plan", was used by only about 17% of trial users.


Interface was unfocussed and overwhelming

The long task-list format works well when you want to expose advanced concepts to users who have already gained a basic understanding of your product. For a new user, it creates unnecessary early friction and, if tasks are not sequenced correctly, decision paralysis.

This was the main directional change that we wanted to introduce. We also identified other general UX considerations for improvement.

1

Crowded layout and lack of focus

The page packed in a lot of information, with no particular sequencing or focus, which we assumed overwhelmed most new users.

2

Iconography, hygiene and accessibility

Icons looked very similar and had poor colour contrast, making it harder for users to quickly distinguish between tasks visually.

3

Limited context and unclear progress

Tasks had no contextual explanation, and progression was unclear, with buttons not reflecting task statuses.

The solution: limited options, focussed on the key action

For brevity, I will skip the multiple iterations and testing we did on CTA copy and description text. We condensed the onboarding widget to three key sections, with a clearly highlighted CTA and clear progression indicators. We understood that getting users to estimate costings was a key value-realisation moment, and we wanted to drive user momentum towards it. Secondary tasks were shown only after completion of the first task.

1

Focus is back on the action at hand

The new design emphasised primary tasks and actions, bringing users' attention back to what they needed to accomplish within the platform.

2

Illustration to break monotony

Visual elements were added to make the interface more engaging and less text-heavy, improving the overall user experience.

3

Greater scannability for progress

The layout was optimised to allow users to quickly scan and understand their progress through various onboarding steps or tasks.

Testing a 50:50 split with trialists

The solution was initially tested in a 50:50 A/B split across all new trial users. Performance was measured separately for Australia and the USA, since we knew the two cohorts showed significantly different characteristics.
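
For illustration, here is a minimal sketch of how a deterministic 50:50 assignment, with region recorded for segmented analysis, might look. This is an assumption for illustration only; the function names, the experiment key and the `region` field are not Buildxact's actual implementation.

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into control or variant.

    Hashing user_id together with the experiment key keeps the
    assignment stable across sessions without storing extra state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return "variant" if bucket < 50 else "control"

def record_exposure(user_id: str, region: str) -> dict:
    """Tag each exposure with region ("AU" or "US") so results can be
    analysed per cohort; region segments the analysis, not the split."""
    return {
        "user_id": user_id,
        "region": region,
        "variant": assign_variant(user_id, "quick_start_v2"),
    }
```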

Results: The variant was a clear winner!

To ensure statistical significance we ran the experiment for well over two months, tweaking copy and CTAs as we learned more. The eventual results were decidedly one-sided: the new onboarding widget not only drove a whopping 197% improvement in users reaching costings, but also lifted the guardrail metrics around getting users to upload a plan.
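
To give a sense of the kind of significance check this involves, the sketch below runs a two-proportion z-test comparing conversion to costings between control and variant. The counts are placeholders, not the experiment's actual numbers.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Placeholder counts: 1,000 users per arm, 9% vs 18% reaching costings.
z, p = two_proportion_z_test(90, 1000, 180, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests the lift is real
```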


Conclusion

We knew that getting users to estimate costings was only part of the problem; the larger problem was getting them to understand the value of estimating costings. While deep in the middle of running this experiment, we had already roadmapped the sequence of experiments to follow.

In conclusion, although we had a net positive impact on driving users to estimate costings and plan uploads, the net impact on conversions and ARR was not yet clear. The follow-up experiments cleared some of the haze.

Experiment 2

Learn to create an estimate - faster

Our research revealed a significant hurdle in demonstrating the value of our costing features. We discovered that:

  1. The majority of our customers had prior experience with Microsoft Excel.

  2. A primary motivation for migrating to Buildxact was to save time and increase efficiency in estimating processes.

  3. However, without a sales demonstration, most users struggled to understand how Buildxact could deliver these benefits.

To address this challenge, we focused on driving early value realization. We implemented a series of experiments designed to guide users towards understanding and leveraging Buildxact's advantages more effectively.

Friction in the funnel: templates were never used

A funnel analysis revealed that a large cohort of our trial and subscribed users never used a template or created their own. The interviews we conducted were conclusive on the value of templates in driving retention. Based on these findings, we identified template usage as a critical area for improvement.
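
To sketch what such a funnel analysis can look like in practice (the event names below are hypothetical, not our real analytics schema):

```python
from collections import defaultdict

# Ordered funnel steps; the event names are hypothetical.
FUNNEL = ["trial_started", "estimate_created", "template_used"]

def funnel_dropoff(events):
    """events: iterable of (user_id, event_name) tuples.

    Returns the number of users who reached each step, counting only
    users who also hit every earlier step (event ordering is ignored
    for simplicity).
    """
    seen = defaultdict(set)
    for user_id, name in events:
        if name in FUNNEL:
            seen[name].add(user_id)

    reached = set(seen[FUNNEL[0]])
    counts = []
    for step in FUNNEL:
        reached &= seen[step]
        counts.append((step, len(reached)))
    return counts
```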


A comparison of template usage between subscribed and trial users

The interface underserved the template experience

Template selection, which had the potential to be a powerful driver of activation, was underserved in the product and hardly discoverable for a new user.

1

Choice Paralysis

Users had to choose a template every time they created an estimate, adding unnecessary decision-making.

2

Dropdown vs show all

Clicking the dropdown to reveal a list of templates further increased decision overload.

3

Overwhelming previews

Once in the preview, the interface was overwhelming, with unclear CTAs. Most users who got this far never proceeded past it.

Exploring a visual template experience and doubling down on value via live priced templates

In the solution, we aimed to streamline the estimate creation process, highlighting templates as part of it. We also experimented with live-priced templates through partnerships with third-party providers.

The solution was A/B tested for over a month before being released to 100% of trial users.

1

A new grid layout for templates

The new grid layout, along with consistent iconography, created visual balance in the presentation of templates.

2

Progressive disclosure

The preview was now part of the creation flow, progressively disclosed to the user as they made a selection.

3

A simple preview

The content of the preview was condensed to only show the costings, enabling faster decision making.

Results - Increased template usage among both new and existing users

The segmented testing made it clear we were driving the results expected of the experiment: on a monthly average, approximately 3,000 templates were used in estimate creation. This also gave us more insight into the job types our customers were likely to take on.


The distribution of template activity for a single month

We planned a staged release of the feature to new users, and with each stage we saw a gradual uptick in the number of trial users using a template.
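
A staged release like this is typically driven by a percentage-based feature flag. A minimal sketch, with an illustrative flag name and threshold rather than our actual configuration:

```python
import hashlib

ROLLOUT_PERCENT = 25  # raised in stages, e.g. 10 -> 25 -> 50 -> 100

def is_in_rollout(user_id: str, flag: str = "template_grid") -> bool:
    """Stable per-user rollout: because the threshold only ever increases,
    a user who receives the feature keeps it in later stages."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < ROLLOUT_PERCENT
```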


The graph shows an uptick in the number of users using templates since feature release in Oct 2023

On broader release, the feature also received a lot of love: over 250 users rated it positively, at a 7% response rate, making it one of the most highly rated releases in the product's history.


The survey response for feedback on the new template experience

Key Learnings

  1. Pre-built, industry-specific templates can significantly impact user retention and engagement

  2. Third-party partnerships, in our case for Live Pricing, can enhance the product offering and user experience

  3. Funnel analysis is crucial for identifying friction and arriving at a hypothesis

  4. The aesthetic-usability effect is valid

Conclusion

The release of this feature not only resulted in more trial users being activated, but also gave us new insights into the behaviours and characteristics of our users, based on their template selections.

There was a strong correlation between the rollout of this feature and ARR growth, specifically in the US market. This kickstarted further segmentation work that would eventually lead to a proposal for a new lite version of the Buildxact platform catering to the US market.

Experiment 3

To Excel in navigation

A majority of our new users were switching either from pen and paper or from Excel.

Buildxact's navigation not only departed from the conventions of Excel-like software, but some of the artefacts it used also disoriented users.

Additional research highlighted that uplifting navigation could have the greatest impact on activation.

Evaluation - Previewing a proposal

Previewing a proposal was a key step in completing the estimating process, and was identified as an Aha moment, where the user gets to see the results of their labour.

However, most of our users never reached the proposal preview page; only 16.9% of all trial users ever did.


Theory: Perceived blind spot

Well-established research on eye-movement tracking shows that we assign higher importance to content on the left of the screen, with perceived importance gradually decreasing as you track right, reaching its lowest point around two-thirds of the way across and then increasing again towards the extreme right.

Our click data for each of the tabs gave strong evidence for this theory.

The quote letter tab, the one in question, had the fewest clicks, presumably because its position overlapped with the visual blind spot.

Other UX Issues

Additionally, we identified and resolved some general issues in the navigation user experience, as noted below:

1

Icons lacked clarity

Users struggled to understand the icon semantics, missing the core association between estimates, leads & clients.

2

Title placement disoriented users

The lack of emphasis on estimate titles meant users were routinely lost in the application's information hierarchy.

3

Unclear association and emphasis

Estimate status was a rarely used artefact; due to its placement, users didn't associate it with their workflow.

Exploring a solution with mental models borrowed from Excel

The solution borrowed cues from Microsoft Excel's way of presenting page information. We also added provisions for future notifications and global search as part of the uplift.

As with the rest of the experiments, the new header was A/B tested for over a month after release.

1

Icons replaced with labels

During user testing it was clear that labels were the preferred means of communicating abstract terms like leads and ref no.

2

A new title with status

Added a new title bar and placed the status next to it, invoking the law of proximity to connect the two concepts.

3

Prepare quote as verb and action

We changed 'Quote Letter' to 'Prepare Quote', which communicated an action, and emphasised it through visual placement.

1

Quick title edits

In keeping with the conventions of Google Sheets, users could now edit an estimate's title from the title bar.

2

Status and contextual actions

By associating status changes with template creation, we wanted to drive users towards creating templates.

Results - More quotes sent, more templates created, positive pivot in ARR growth

The nav was A/B tested for a month. During the test we saw a clear differentiation favouring the variant: not only did Prepare Quote see a 67% increase in visitors compared to Quote Letter, but all the other tabs also saw a noticeable increase in reach. We further saw a 48% increase in the number of users sending a quote.

The new nav further drove more users to create templates, with a 6.9% increase in templates created during a trial.


Percentage of visitors per tab for the new and old navigation during the A/B test

Key Learnings

  1. Reduce friction where possible; the impact may not be immediately evident, but cumulatively it adds up

  2. Designing for existing mental models improves conversions; our users' familiarity with Excel was a key driver of the nav uplift

Conclusion

The experiment clearly highlighted clarity of presentation as a driver of customer activation. For brevity I've skipped some of the iterations on this experiment, but overall it positively impacted all of our target metrics.

Experiment 4

Highlight partners, highlight reputation

One of the key research highlights was that users who connect to a supplier during their trial are 2.16x more likely to subscribe.

This insight drove us not only to build a whole new way of connecting to suppliers, but also to work with our supplier partners to reduce the time taken to approve connection requests.

Evaluation - The 2nd most important action

Our analysis highlighted that the second most important user action leading to subscription was connecting to a supplier. However, we had two problems: suppliers took more than a day to respond to connection requests, and the path to connect to a supplier in the app was not obvious.


A graph showing the highest-converting activation actions

Our analysis showed that only about 1.1% of all trial users ever connected with a supplier; by contrast, over 60% of subscribed users had done so.
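
For reference, the 2.16x figure quoted earlier is simply the ratio of conditional subscription rates between connected and non-connected trialists; a sketch with placeholder counts, not the real cohort sizes:

```python
def subscription_lift(connected_subs: int, connected_total: int,
                      other_subs: int, other_total: int) -> float:
    """Ratio of subscription rates: connected vs non-connected trialists."""
    return (connected_subs / connected_total) / (other_subs / other_total)

# Placeholder counts, not the real cohort sizes.
print(round(subscription_lift(54, 100, 250, 1000), 2))  # -> 2.16
```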

Exploring a solution that highlighted suppliers and the value in connecting

The existing workflow for connecting to a supplier involved visiting the integrations menu, clicking on supplier integrations, then selecting suppliers and connecting, a path effectively hidden from a user's typical workflow.

We strongly believed this workflow had to be more discoverable and part of the user's contextual journey. Hence we decided to introduce supplier interactions as a new, dedicated UI component right where users needed them: in the estimate costings tab.

This would not only make suppliers more discoverable and easier to connect to, but would also speak to our brand's reputation by highlighting a rich partner ecosystem that none of our competitors could boast of.

1

Quick Access

The supplier drawer was made more discoverable and easy to access with a dedicated side panel.

2

Curiosity that drives learning

The side panel also informed the user of the existence of catalogs and recipes, which to many was a foreign concept.

3

Builds trust through supplier partners

As an onboarding instrument, the side-panel with its list of popular brands tried to create instant trust among trialists.

4

Mobile Responsive

The new supplier drawer created an efficient interface for browsing and connecting with suppliers on mobile.

5

Progressive Disclosure

The design reveals information gradually, starting with broad categories and allowing users to drill down to more specific details.

6

Persistent Search

The persistent search bar allowed users to quickly find specific items without having to navigate through multiple levels.

Results - A massive spike in new supplier connections, further uptick in conversions

We followed the same tried-and-tested method of A/B testing with an incremental feature rollout to trial users. We immediately saw a significant increase in supplier connection requests, cumulatively around three times more than with the old experience.


We also rolled it out to all existing users, which resulted in a massive spike in new connection requests. (Notably, our supplier partners were quite pleased with this, though they initially raised an alarm citing suspected spam activity.)


Conclusion

Although we achieved our goal of increasing connection requests, we noticed that it did not materially improve subscription metrics.

One working hypothesis is that our users don't care about supplier connections as much as we previously thought; another is that supplier pricing is less of a concern during estimating. User interviews in this regard were not very conclusive, but did shed some light on the hypotheses. We will need to run more tests in the future to support either argument conclusively.

Overall Impact of all experiments since September 2023

A clear correlation was seen between the launch of the new estimates onboarding, with templates and live pricing, and its net impact on ARR. The sales team were convinced that users were activated, and further into the product, before they were reached via a sales call.

This further gave the business the confidence to pursue a PLG strategy, with a no-touch sales model for 10% of all sign-ups, which will likely be our next experiment.


The USA ARR growth rate saw hockey-stick growth starting right around the time of our second experiment's launch

PLG Impact: 40% self-served

Although it was a small subset of users, we saw a significant shift in the number of no-touch users. The reduction in churn correlated with the timing of the new supplier drawer's release.


Improvements in PLG

Key Learnings

  1. Avoiding Snapshot Metrics:
    Relying on snapshot-in-time metrics can be misleading; focus on historical trends for more accurate insights.

  2. Funnel Analysis Optimization:
    Going too broad or too specific in funnel analysis can lead to incorrect conclusions. Find the right balance in granularity and ensure drop-offs are measured correctly.

  3. Handling Failed Experiments:
    Managing failed experiments is challenging, especially in terms of communication. Establish a clear communication protocol for informing stakeholders and users about changes, and involve customer support early in the process to better explain changes to users.

  4. Balancing Metrics:
    Some experiments showed positive indications but negatively impacted guardrail metrics. Always factor in guardrail metrics when measuring the success or failure of an experiment.

  5. Differentiating Between User Groups:
    Testing with trial users differs significantly from releasing to the existing customer base.
    - Create distinct testing protocols for trial users and existing customers.
    - Implement a phased rollout strategy for major changes.

  6. Formalizing the Release Process:
    A formalized release process with proper checks is crucial. We developed a comprehensive release checklist and workflow.

