Remote conversion rate optimisers apply systematic experimentation and user behaviour analysis to increase the percentage of visitors who complete the actions a business cares about — purchases, sign-ups, trials, bookings, or any other measurable conversion goal — turning existing traffic into more revenue without increasing the cost of acquiring that traffic. The role is where data analysis, behavioural psychology, and UX design intersect with commercial outcomes.
What they do
Conversion rate optimisers conduct qualitative and quantitative research to identify the friction points, trust gaps, and confusion that prevent visitors from converting — using heatmaps, session recordings, funnel analysis, exit surveys, and usability testing to build a prioritised picture of where and why conversions are lost. They design and run A/B and multivariate tests — formulating hypotheses grounded in behavioural evidence, specifying test variations, setting up experiments in testing platforms (Optimizely, VWO, Statsig), and running tests to the sample size a power analysis calls for rather than stopping early. They analyse test results — interpreting statistical and practical significance, avoiding the false positive traps of premature stopping and multiple comparison errors, and distinguishing genuine conversion improvements from random variation. They work with designers and developers to implement winning variations, ensure the production implementation matches the tested variant, and track post-launch performance to confirm the conversion lift persists. They manage the optimisation backlog — a prioritised queue of hypotheses ordered by expected impact and implementation cost — and report optimisation programme outcomes to marketing and commercial leadership.
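As an illustration of the quantitative discovery work, the sketch below computes step-to-step conversion and drop-off from funnel counts. The step names and visitor numbers are hypothetical; in practice these figures would come from an analytics platform rather than being hard-coded.

```python
# Minimal funnel analysis sketch: locate the weakest step in the funnel.
# Step names and visitor counts are hypothetical illustrations.
funnel = [
    ("landing_page", 50_000),
    ("product_page", 21_000),
    ("add_to_cart", 6_300),
    ("checkout", 2_500),
    ("purchase", 1_400),
]

for (step, entered), (next_step, remaining) in zip(funnel, funnel[1:]):
    rate = remaining / entered
    print(f"{step} -> {next_step}: {rate:.1%} continue, {1 - rate:.1%} drop off")

overall = funnel[-1][1] / funnel[0][1]
print(f"overall conversion: {overall:.2%}")
```

Here the product page to add-to-cart step loses 70% of visitors, which is exactly the kind of signal that prompts qualitative follow-up (session recordings, exit surveys) to understand why.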
Required skills
Strong statistical literacy — understanding of hypothesis testing, statistical significance, power analysis, confidence intervals, and the specific pitfalls of A/B testing (sample ratio mismatch, novelty effects, peeking, interaction effects between simultaneous tests) — is the analytical foundation that distinguishes rigorous CRO from undisciplined button-colour testing. Proficiency with web analytics (Google Analytics 4, Adobe Analytics, Mixpanel) for funnel analysis, segment identification, and the quantitative discovery of conversion problems. User research skills — the ability to run qualitative research (user interviews, usability testing, survey design, session recording analysis) that reveals the motivations, concerns, and confusion points that quantitative data alone cannot explain. Clear communication of test results and business impact to non-technical marketing and leadership audiences.
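To make the power-analysis requirement concrete, here is a minimal sketch, using statsmodels, of the per-variant sample size needed to detect a lift from a 3.0% to a 3.5% conversion rate at a 5% significance level and 80% power. The baseline rate and target lift are illustrative assumptions.

```python
# Sample size sketch for a two-sided A/B test on conversion rates.
# Baseline rate and target lift are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.030  # assumed current conversion rate
target = 0.035    # smallest lift worth detecting

effect = proportion_effectsize(target, baseline)  # Cohen's h
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80,
    ratio=1.0, alternative="two-sided",
)
print(f"~{n_per_variant:,.0f} visitors per variant")
```

Under these assumptions the answer comes out around 20,000 visitors per variant, which is why low-traffic sites struggle to detect small lifts and why the sample size must be fixed before the test starts.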
Nice-to-have skills
UX and visual design skills — the ability to produce or specify high-quality test variations rather than just instruct designers — for organisations where the CRO specialist is expected to own the creative and UX direction of test variants, not just the analysis. Experience with personalisation and segmentation platforms (Dynamic Yield, Monetate, Evergage) for organisations running segmented or personalised experiences alongside traditional A/B tests. Background with landing page optimisation for paid acquisition — the specific CRO challenge of optimising the message match, above-the-fold value proposition, and form design on campaign landing pages that receive high-intent traffic from paid search and social.
Remote work considerations
CRO is highly compatible with remote work — research, analysis, test design, stakeholder communication, and results reporting are all async activities. The collaborative dimension — working with designers and developers to implement test variations — works effectively through detailed written test specifications, annotated wireframes, and async design review processes. Remote CRO specialists typically invest in strong documentation of their testing methodology and hypothesis backlog, both to maintain programme continuity and to communicate the rigour of the experimentation programme to stakeholders who are not close to the day-to-day work. The real-time test monitoring dimension requires access to analytics dashboards from any location, which is standard with modern analytics and testing platforms.
Salary
Remote conversion rate optimisers earn $80,000–$130,000 USD at mid-level in the US market, with senior CRO specialists and CRO managers at large e-commerce and SaaS companies reaching $145,000–$200,000+. European remote salaries range €55,000–€95,000. High-volume e-commerce companies (where a 1% conversion rate improvement translates to significant revenue), SaaS companies with free trial or freemium conversion challenges, financial services companies with online application conversion funnels, and digital marketing agencies with CRO service offerings pay at the upper end.
Career progression
Digital analysts, UX designers with quantitative skills, and performance marketers who develop experimentation depth move into CRO roles. From CRO specialist, the path runs to senior CRO specialist, CRO manager, head of optimisation, and director of growth or product. Some CRO practitioners move into growth product management, into broader product analytics leadership, or into digital consulting where CRO is a client-facing service.
Industries
E-commerce companies (where conversion rate directly determines revenue per session and return on ad spend), SaaS companies with self-serve sign-up and free trial conversion funnels, travel and hospitality booking platforms, financial services companies with online application and account opening funnels, and digital marketing agencies running CRO programmes for multiple clients are the primary employers.
How to stand out
Demonstrating specific conversion improvements with documented test results and revenue impact — the checkout flow test that increased purchase conversion by X%, the landing page variant that improved trial sign-up rate from X% to Y%, translating to $X in additional annual revenue — positions CRO as a measurable commercial function. Being specific about the testing infrastructure you built — the hypothesis documentation system, the statistical rigour standards, the test prioritisation framework — shows programme management depth. Remote candidates who demonstrate a systematic, documented testing methodology — hypothesis templates, sample size calculators in use, interaction effect controls — show the rigorous experimental discipline that distinguishes high-quality CRO from naive button-colour testing.
FAQ
What is statistical significance and why does it matter for A/B testing? Statistical significance is conventionally declared at the 5% level (α = 0.05, often described as "95% significance"): the test rejects the null hypothesis that the control and variant perform equally when the p-value falls below 0.05. A p-value under 0.05 means that if the control and variant truly performed identically, a difference at least this large would arise by chance no more than 5% of the time; it caps the long-run false positive rate at roughly 1 in 20, not the probability that any individual winning result is genuine. Statistical significance matters because without it, CRO teams risk implementing changes based on random variation rather than genuine improvement. The most common mistake is "peeking" — checking results daily and stopping the test as soon as significance appears — which dramatically inflates the false positive rate, because random fluctuation will produce a significant-looking result at some point in almost any test if you check often enough. Proper testing requires setting the sample size in advance based on a power analysis and running the test to completion.
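To ground this, here is a minimal sketch of evaluating a completed test with a two-proportion z-test via statsmodels; the visitor and conversion counts are invented for illustration.

```python
# Two-proportion z-test on a finished A/B test.
# Visitor and conversion counts are invented for illustration.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

conversions = np.array([310, 370])     # control, variant
visitors = np.array([10_000, 10_000])

z_stat, p_value = proportions_ztest(conversions, visitors)
rate_a, rate_b = conversions / visitors
print(f"control {rate_a:.2%} vs variant {rate_b:.2%}, p = {p_value:.3f}")
if p_value < 0.05:
    print("significant at the 5% level (valid only if sample size was fixed in advance)")
else:
    print("no significant difference detected")
```

The caveat in the output matters: the p-value is only trustworthy if the sample size was committed to before the test began, for exactly the peeking reasons described above.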
What is the difference between A/B testing and multivariate testing? An A/B test compares one variation against the control — isolating the impact of a single change (a different headline, a different button colour, a different CTA). A multivariate test (MVT) simultaneously tests multiple variations of multiple page elements — for example, 3 headline options × 2 image options × 2 CTA options — to identify the best-performing combination and understand interaction effects between elements. A/B testing is simpler, faster (requiring less traffic to reach significance), and more interpretable. MVT is more powerful for understanding element interactions but requires significantly more traffic to reach significance across all combinations. In practice, most CRO programmes rely primarily on A/B testing for its speed and interpretability, and reserve MVT for high-traffic pages where interactions between elements plausibly matter and the team has validated that traffic can support testing every combination.
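The traffic cost of MVT follows directly from combinatorics, as the short sketch below shows for the 3 × 2 × 2 example; the element options and per-variant sample size are hypothetical placeholders.

```python
# Full-factorial multivariate test: every combination is its own variant,
# and each needs roughly the same sample as a single A/B arm.
from itertools import product

headlines = ["H1", "H2", "H3"]            # hypothetical options
images = ["img_a", "img_b"]
ctas = ["Start free trial", "Book a demo"]

variants = list(product(headlines, images, ctas))
print(f"{len(variants)} combinations to test")  # 3 * 2 * 2 = 12

n_per_variant = 20_000  # e.g. from a power analysis
print(f"~{len(variants) * n_per_variant:,} visitors for the full MVT, "
      f"vs ~{2 * n_per_variant:,} for a single A/B test")
```

Twelve arms instead of two means roughly six times the traffic for the same per-variant power, before accounting for the extra comparisons.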
How do you prioritise a CRO backlog? Using a scoring framework that combines impact and effort. The most widely used is the PIE framework: Potential (how much improvement is possible — pages with low conversion rates relative to similar pages have more potential), Importance (how much traffic and revenue does the page drive — a 2% improvement on the homepage matters more than a 2% improvement on a low-traffic deep page), and Ease (how difficult is the test to implement — a copy change is easier than a checkout flow redesign). Each hypothesis is scored across these three dimensions and ranked by total score. More sophisticated prioritisations also account for confidence (how strong is the qualitative and quantitative evidence that the hypothesis addresses a real problem) and test duration (how long will the test take to reach significance given current traffic — a test that takes 6 months to complete has a lower effective priority than its score suggests).
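A minimal sketch of PIE scoring as described above; the hypothesis names and scores are hypothetical, and teams often weight the three dimensions differently.

```python
# PIE prioritisation sketch: score each hypothesis 1-10 on Potential,
# Importance, and Ease, then rank the backlog by the average score.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    potential: int   # room for improvement (1-10)
    importance: int  # traffic and revenue weight (1-10)
    ease: int        # implementation simplicity (1-10)

    @property
    def pie_score(self) -> float:
        return (self.potential + self.importance + self.ease) / 3

backlog = [
    Hypothesis("Simplify checkout form", potential=8, importance=9, ease=4),
    Hypothesis("Rewrite hero headline", potential=6, importance=7, ease=9),
    Hypothesis("Add trust badges to cart", potential=5, importance=8, ease=8),
]

for h in sorted(backlog, key=lambda h: h.pie_score, reverse=True):
    print(f"{h.pie_score:.1f}  {h.name}")
```

Extending the dataclass with a confidence field and an estimated test duration, as the more sophisticated prioritisations above suggest, is a natural next step.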