Chapter 01 · Story

Plans evolved. We paused, pressure-tested, and refreshed.

Q1 2026 was a productive but mis-shaped quarter against the original 90-day plan. Three structural realities changed what we shipped — and produced two of this engagement's most important deliverables along the way. This dashboard walks through the story: where we were, what we found, and where we're going.

Where we were
4 goals · 6 pillars · 90 days
The Mar 2026 strategy committed Synopsys to +30% organic traffic, +40% leads, AI search leadership, and SEO strength through the Ansys merger — operationalised through six pillars and an aggressive 90-day Q1 kickstart.
What we discovered
Threat axis rotated 90°
Cadence is stable alongside us in EDA. The real movement: Intel, NVIDIA and AMD gaining 13–21 points in our application-domain adjacencies. Two of the most important Q1 deliverables — the AI Visibility methodology rebuild and the Tiered Asset Strategy — weren't on the original plan.
Where we're going
57 actions · 3 windows · 5 weeks to Processor IP
A refreshed strategy that holds the four FY26 goals, refines Goals 3 and 4, resequences the pillars, moves Automotive into Q2, reframes the Steering Committee ask as a deliberate triage between SD's hands-on AEM work and SD-to-dev packaged handoff, and operationalises the Tiered Asset Strategy for the June 1 Processor IP close.
02 · The framing that runs through everything
Read this first if you only have time for one section — it frames every section that follows.

The integrated search reality — one shifted environment, two measurement frames

Bottom line
AI search and traditional search now feed each other through one flywheel. What looks like two scoreboards is one engine. Goal 1 reframing, the Tiered Asset Strategy, the 119-page citation queue, and the Phase 1 schema work only make sense when read across both frames at once.

The temptation when reading a search-strategy review in 2026 is to treat AI search and traditional search as two separate problem areas, with two separate sets of investments and two separate scoreboards. The data tells a more useful story: there is one shifted search environment expressing itself through two measurement frames. Goal 1 reframing, the Tiered Asset Strategy, the 119-page citation queue, and the Phase 1 schema work only make sense when those two frames are read together.

Flywheel nodes: Traditional Authority · AI Citations in Answers · Brand Search Demand · Indexing & Engagement

One engine, turned by the same investments

AI engines cite high-authority indexed pages — so traditional SEO authority is the input to AI visibility.

AI mentions push buyers toward branded queries on Google — so AI visibility is an input to traditional brand search volume.

Branded query growth deepens indexing and dwell signals, which feeds back into the corpus AI engines draw from. The same flywheel runs in both directions, and the same investments turn it. What looks like two scoreboards is one engine.

Brand demand
+41% YoY
Branded query clicks growing while non-brand is flat — AI mentions are funnelling buyers to brand search.
AI Overview pressure
~50% CTR cut
Same rank, half the clicks on informational queries. Citation Share is now the metric that matters.
Citation Share
5.2% → 9.0%
Priority cluster lift — measurable AI visibility growth on the cluster where Synopsys has deepest traditional authority.
Citation queue
119 pages
Already-ranked pages not cited in our Priority 100 measurement (3 engines, 76 days) — 105,751 organic clicks, 13M impressions waiting to be turned into citation surface.

1. Brand traffic +41% while non-brand is flat

Traditional frame
-25% YoY non-brand clicks (GSC). Reads like demand decay.
AI frame
+60% AI Visibility (Writesonic, Mar→Apr 2026). New branded entry pattern.
Read together: Demand isn't falling — it's rotating from non-brand discovery to branded direct intent, mediated by AI answers.

2. AI Overview compresses CTR at unchanged rank

Traditional frame
Same rank, ~50% fewer clicks on informational queries.
AI frame
The AI Overview answer above the rank is absorbing the click.
Read together: Ranking and traffic have decoupled. Citation Share now decides whether informational traffic exists at all.

3. Citation Share 5.2% → 9.0% on the priority cluster

Traditional frame
Cluster pages with the deepest indexed authority.
AI frame
Same pages were the first AI engines reached for.
Read together: Traditional authority is predictive of AI citation. The 119-page queue is the next compounding lever.

4. Divestiture authority leakage is dual-frame

Traditional frame
~2,300 monthly clicks lost (lidar -939, photonics -461, etc.).
AI frame
Same pages are the citation surface AI engines reach for on adjacent queries.
Read together: The Tiered Asset Strategy is a dual-purpose defence — one decision preserves both the click stream and the citation surface.

5. Schema Phase 1 templates — dual payoff

Traditional frame
Cleaner Google parsing, rich-results eligibility, indexing clarity.
AI frame
AI Overview eligibility and LLM ingestion clarity improve in parallel.
Read together: Branden's stable templates land twice on the scoreboard. Hreflang and robots.txt cleanup behave the same way — integrated-search infrastructure, not classical SEO.

6. Peer authority gap compounds across both frames

Traditional frame
Cadence ~23 adobe.com referring links vs Synopsys ~5; similar gaps elsewhere.
AI frame
LLMs preferentially cite high-authority sources — same gap, AI-side cost.
Read together: The PR/earned-media ask closes the same authority deficit on both frames simultaneously. It's not "more links for SEO" — it's a doubled-up effect.
What this means for measurement

Why Goal 1 needs to be measured as a composite KPI

Original Goal 1 — +30% YoY non-brand organic traffic — was a sensible target in a world where non-brand traffic was the primary expression of search demand. In a world where brand search is +41% YoY, AI Overview compresses informational CTR by ~50%, and AI citation share is itself a measurable demand signal, that single number is no longer a faithful picture. The composite KPI proposed in this document — non-brand organic clicks + AI citation share growth on priority clusters + branded query volume growth — isn't a softer goal. It's a more honest one. Each of its three components captures one expression of the same underlying demand, and movement in any one of them is interpretable in light of the other two.
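A minimal sketch of how the composite could be computed, assuming equal weights across the three components — the weights, function shape, and illustrative inputs below are assumptions, not the agreed KPI definition:

```python
# Hypothetical sketch of the Goal 1 composite KPI: three expressions of the
# same demand, each measured as period-over-period growth, equally weighted.
# Weights and inputs are illustrative assumptions, not the agreed definition.

def growth(current: float, prior: float) -> float:
    """Period-over-period growth rate, e.g. 0.41 for +41%."""
    return current / prior - 1

def composite_kpi(non_brand_clicks, citation_share, brand_queries,
                  weights=(1/3, 1/3, 1/3)) -> float:
    """Each argument is a (current, prior) pair; returns weighted growth."""
    components = [growth(*pair)
                  for pair in (non_brand_clicks, citation_share, brand_queries)]
    return sum(w * c for w, c in zip(weights, components))

# Illustrative values echoing this review: non-brand clicks roughly flat,
# citation share 5.2% -> 9.0% on the priority cluster, brand clicks +41% YoY.
score = composite_kpi(
    non_brand_clicks=(53_000, 52_852),
    citation_share=(9.0, 5.2),
    brand_queries=(151_333, 107_329),
)
print(f"{score:+.1%}")
```

The point of the sketch: a collapse in any one component is visible in its own term rather than being averaged away silently, which is exactly what the single click number cannot do.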

03 · The starting point
The Mar 2026 strategy as committed — included so the rest of the document is anchored against it.

The original 2026 strategy — what was committed

The canonical 2026 strategy was set in March. It targeted four FY26 goals, structured the work around six pillars, and committed to an aggressive 90-day Q1 kickstart. This section is the "before" picture — what the strategy looked like before we'd lived through Q1 and gathered the data that came back.

Four FY26 goals

1
Grow Search Traffic — expand visibility in US, EMEA and APAC. Target: +30% organic traffic YoY (+1.24M monthly clicks).
2
Get More Leads from Search — attract new audiences and convert them into sales opportunities. Target: +40% organic leads YoY.
3
Show Up in AI Search — maintain #1 leadership across ChatGPT, Claude, Gemini, Perplexity, Microsoft Copilot.
4
Maintain SEO Strength Through the Ansys Merger — don't lose traffic during the website migration.

Six pillars supporting the goals

1. AI Visibility / GEO
Build 360-prompt tracking framework. Apply GEO best practices to top 10 cited pages.
2. Content Cluster Build
12 clusters total. Q1: launch Digital Twin and Data Centers. Q2-Q3: Auto, Manufacturing, Security IP.
3. Technical SEO Remediation
26,000+ issues. Q1: robots.txt, top 500 redirect chains, schema markup, mobile nav.
4. Divestiture Digital Equity
Develop Tiered Asset Strategy framework. Apply to upcoming Processor IP divestiture.
5. Glossary / Reference Content
94 pages baseline (96% rank #1). Continue scaling. Identify gaps.
6. Cross-Functional Integration
Content review cadence, PR coordination, schema dev support, monthly committee meetings.

The 90-day kickstart commitments

Tracking setup (Wk 1-2): SE Ranking 1,726 keywords, AI visibility baseline (Writesonic 360 prompts target), reporting dashboards.

Technical fixes (Wk 1-12): robots.txt audit (Wk 1) · Schema App for AEM (Wk 2-6) · top 500 redirect chains (Wk 1-8) · duplicate content (Wk 4-10) · mobile nav (Wk 6-12).

Content actions (Wk 1-12): 5 quick wins · 5 high-priority Silicon IP gap pages · 10 GEO-optimised pages on Verification cluster.

Q1 net-new content target: 15 pages (5 Silicon IP, 3 Automotive, 2 Verification, 5 Others).

04 · How we approached this review
Five sequential steps; each step's deliverable is linked back into the document below.

A five-step review process — anchored in evidence

Before refreshing the strategy, we paused at the end of Q1 to test the assumptions underlying it — both assumptions about the competitive landscape (now testable with real AI visibility data) and assumptions about delivery shape (now refinable in light of the Q1 execution experience).

"It's been a busy and productive first quarter of 2026. As we all know, plans evolve, so we'd like to set aside some time to review our SEO strategy, make sure we're focusing on the right priorities, and align on a clear 30/60/90-day plan." — Stephan Marais, 2026-04-28 (the trigger for this review)
0 · Source Inventory — map every input
1 · Pressure-Test — verify findings independently
2 · Q1 Retrospective — reconstruct delivery vs plan
3 · Priority Refresh — resolve 14 open questions
4 · 30/60/90 Plan — operationalise into actions

Each step had a specific purpose and produced a specific deliverable. Together, the five deliverables became the analytical foundation for the refreshed strategy.

Three principles guided the review: anchor every claim in evidence the client can verify · distinguish observed facts from recommended interpretations · treat the original strategy with respect.
05 · The data picture
The Apr 2026 measurement reading in plain language — AI Visibility, Citation Share, the competitive picture, and the topic-by-topic gaps.

Where Synopsys stands today

Our 76-day measurement window (Feb 4 → Apr 21) covered 22,313 individual AI-generated answers across three AI engines. The headline: Synopsys is the clear AI visibility leader — but the topic-level picture reveals where the real Q2 work needs to go.

Data sources powering this review
Six independent datasets, triangulated.
All exports live in the project workspace · re-runnable on demand
Writesonic — Priority 100
22,313 AI answers, 76-day window, 3 engines (AI Overviews, Gemini, ChatGPT). Source for AI Visibility, Citation Share, Share of Voice. Not measured: Perplexity, Microsoft Copilot — closing the gap is the rationale behind the Brand Radar evaluation in the 60-day window.
Google Search Console
16-month trajectory of clicks, impressions, CTR, ranking position. Brand and non-brand split. The ground truth on Google search demand.
SEMrush — Rankings + backlinks
All-rankings export Sept 2025 → Apr 2026 + backlink profile. EDA peer authority comparison — Cadence, Siemens, Ansys, Keysight.
SE Ranking — 645 keywords
The Quick Wins keyword set. Daily ranking position with quality-of-rank metrics. Reconciled against GSC and SEMrush.
SEO Status Sheet
Synopsys-internal action tracker. Source for what was committed vs. what shipped — anchored to email and sync-note evidence.
Backlink authority profile (PDF)
Domain authority and referring-domain breakdown. Quantified the Cadence/Siemens/peer authority gap that compounds across both search frames.
The Priority 100 prompt set · what we're tracking
99 prompts across 11 commercial topics — derived from Synopsys's 3,214-keyword strategic universe.
Apr 2026 export · Writesonic · Google AI Overviews + Gemini + ChatGPT
99 prompts shown · 11 topics
Brand narrative · what AI engines are actually saying
Beyond "are we cited" — the vocabulary AI engines use when they describe Synopsys.
5,648 phrases extracted from 22,313 AI answers · 27 themes · Apr 2026
What this section is. Citation Share answers "are AI engines reaching for our pages?" The narrative view answers a different question: "what are they actually saying when they reach for them?" Writesonic runs the Priority 100 prompts through three engines, captures the natural-language answers, and extracts the specific phrases the engines use to describe Synopsys, its products, and the EDA space. Those extracted phrases — labelled "Keywords" in the platform — are not search queries; they are concept-level vocabulary that AI engines naturally reach for in their answers. Each carries a sentiment score (0–100) and a mention count, and rolls up into one of 27 brand-attribute themes.
Prompts
The 99 questions we test the engines with ("What are the best EDA tools for chip design?"). Listed in the panel above.
Writesonic Keywords
The phrases AI engines use in their answers about us ("Design Compiler logic synthesis", "steep learning curve challenges"). 5,648 of these.
Search keywords (SEO)
The queries users type into Google. Tracked separately via GSC, SEMrush, SE Ranking. Different platform, different metric.
Total phrases tracked
5,648
Extracted across 22,313 AI answers
Mentions across answers
83,600
Total times these phrases appeared
Positive vs Friction
5,464 vs 184
Sentiment ≥ 50 vs < 50
Themes covered
27
Brand-attribute categories
Where AI engines spend their airtime
Themes ranked by total mentions. Three themes — Core Functionality, Feature Variety, and Competitive Differentiation — soak up two-thirds of all AI airtime about Synopsys (65.7%). The themes at the bottom (Customer Feedback, Service Responsiveness, Ease of Use) are dimensions where the brand is functionally invisible to AI narrative. Where airtime is thin, the AI vocabulary doesn't yet exist — content has to create it.
What AI engines say positively — top mentions
The most-frequently-used positive phrases are mostly product names and capability claims. The pattern is clear: AI engines anchor positive Synopsys narrative on specific products (Design Compiler, Sentaurus TCAD, DSO.ai, VCS, Fusion Compiler), category authority ("major EDA vendors", "industry-standard RTL synthesis tools"), and quantified market position ("dominant market share over 55%"). This is engine-level brand recognition working in Synopsys's favour.
What AI engines say negatively — friction points
These are the perceptions and barriers AI engines surface in their answers. Two things to notice: (a) the friction is structurally honest — most of these reflect genuine EDA-industry-wide perceptions (steep learning curve, long compile times, high cost) rather than Synopsys-specific failings; (b) they are addressable through content — onboarding-flavoured, ROI-framing, time-to-productivity content gives AI engines new vocabulary to reach for instead.
Top 15 positive mentions
By count · sentiment ≥ 50
Most-spoken phrases AI engines use to describe Synopsys positively. Numbers = times the phrase appeared across the 22,313 answers.
Top 12 friction points
By count · sentiment < 50
The negative phrases AI engines surface. These are smaller in count (184 total negative phrases vs 5,464 positive) but tactically informative for content strategy.
What this means strategically
Three uses for the narrative view, beyond what Citation Share already tells us.
1. Defend the narrative. 5,464 positive vs 184 negative is a strong overall position. The negative phrases cluster on cost / complexity / learning curve — industry-wide perceptions, addressable through tutorial, ROI, and time-to-productivity content that creates citable phrases AI can reach for instead.
2. Fill the narrative gaps. Themes with low keyword counts (Customer Feedback, Service Responsiveness, Ease of Use) are dimensions where Synopsys is functionally invisible to AI engines. If the brand wants to be associated with strong support or accessible onboarding, content has to be produced that creates new citable vocabulary in those themes.
3. Read engines differently. The same brand has different narratives across engines — Gemini anchors heavily on DSO.ai (345 mentions), ChatGPT pulls more product-feature-flat (Sentaurus, RTL synthesis, VCS), Google AI Overviews leans toward IP portfolio framing. This can inform engine-specific content prioritisation if Synopsys decides to optimise per platform once 5-engine coverage lands.
AI Visibility
48.8%
9.75pts ahead of Cadence
Citation Share
42.4%
2.7× Cadence's rate
Share of Voice
26.7%
Stable across window
Sentiment
97%
Positive when mentioned
Competitive leaderboard · AI Visibility & Citation Share
Synopsys leads — and the citation gap is even larger than the visibility gap.
Priority 100 · 76-day window
Brand · AI Visibility (% of answers mentioning brand) · Citation Share (% of answers citing brand's pages)
Synopsys · 48.8 · 42.4
Cadence Design Systems · 39.1 · 15.6
Siemens EDA · 23.6 · 14.0
Intel · 18.2 · 3.3
NVIDIA · 12.8 · 2.7
AMD · 10.9 · 2.2
TSMC · 9.0 · 1.7
Arm · 8.9 · 3.1
Ansys · 7.0 · 2.4
Keysight Technologies · 3.7 · 6.2
The citation share gap is wider than the visibility gap. Synopsys's 42.4% citation share is 2.7× Cadence's 15.6% — a much larger lead than the 9.75pt visibility gap suggests. When AI engines reach for a source to cite, they reach for Synopsys far more often than for any peer. This is the higher-quality signal and the one most directly improvable by the GEO retrofit programme.
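Both leaderboard metrics reduce to simple ratios over the answer set. A minimal sketch, assuming each answer record carries the brands it mentions and the domains it cites — these field names are illustrative, not Writesonic's actual export schema:

```python
# Sketch of the two leaderboard metrics over AI-answer records.
# Field names ("mentions", "cited_domains") are illustrative assumptions,
# not Writesonic's actual export schema.

def ai_visibility(answers, brand):
    """% of answers whose text mentions the brand."""
    hits = sum(1 for a in answers if brand in a["mentions"])
    return 100 * hits / len(answers)

def citation_share(answers, domain):
    """% of answers that cite at least one page on the brand's domain."""
    hits = sum(1 for a in answers if domain in a["cited_domains"])
    return 100 * hits / len(answers)

# Tiny synthetic answer set (the real window holds 22,313 answers).
answers = [
    {"mentions": {"Synopsys", "Cadence"}, "cited_domains": {"synopsys.com"}},
    {"mentions": {"Synopsys"},            "cited_domains": {"cadence.com"}},
    {"mentions": {"Cadence"},             "cited_domains": set()},
    {"mentions": {"Synopsys"},            "cited_domains": {"synopsys.com"}},
]
print(ai_visibility(answers, "Synopsys"))       # 75.0
print(citation_share(answers, "synopsys.com"))  # 50.0
```

The sketch also shows why the two numbers can diverge: an answer can mention a brand without citing its pages, which is exactly the gap the retrofit programme targets.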

Topic-level picture — where wins and gaps live

Where Synopsys is strong (top 4 topics in purple), the strategy is working. The bottom 5 topics in magenta are where chip-makers are gaining and the Synopsys+Ansys Full Stack story is currently absent.

Synopsys AI Visibility · 11 topics
From 90.9% dominance in EDA down to 6.6% near-absence in Automotive.
Priority 100 · 76-day window
Electronic Design Automation
90.9%
Silicon Design
61.9%
Semiconductor IP Solutions
59.4%
Verification
54.8%
Silicon IP
48.7%
Manufacturing ↓-17.7pt
45.4%
AI & Machine Learning
40.0%
Multi-Die System Integration
39.4%
Security IP
38.2%
HPC & Data Center
22.2%
Automotive CRITICAL
6.6%
Where the Q2 work concentrates. Defend the top four (existing content production continues). Recover Manufacturing (DFM Synopsys+Ansys joint content). Build the bottom five — Auto, HPC, Security IP, Multi-Die, AI/ML — as Synopsys+Ansys joint clusters where the Full Stack story is currently absent. NVIDIA dominates Automotive at 26.8%; Intel leads HPC at 55.0%; AMD leads HPC at 49.2%.

The 119-page GEO citation queue

119 pages on www.synopsys.com get more than 250 organic clicks each over the measurement window and were not cited in any of the 22,313 AI answers we measured. Combined: 105,751 organic clicks and 13M Google impressions. Each can be GEO-retrofitted (Quick Answer headers, Q&A blocks, schema markup).

The 119-page funnel · how we got there
From the GSC top-1,000 to a 119-page priority queue.
≥ 250 organic clicks · 76-day window
1,000 GSC top pages
Maximum GSC export size for the property in the 76-day window.
1,000
starting set
In-scope pages on www.synopsys.com
Subdomains careers / solvnet / investor / news excluded — not part of the marketing AI optimisation surface.
864
in scope
Pages not cited in our Priority 100 measurement
≥50 clicks. Not cited on Google AI Overviews / Gemini / ChatGPT in the 22,313 answers tracked. The full opportunity surface.
631
≥ 50 clicks
Priority retrofit queue (the headline 119)
≥ 250 clicks each. 105,751 combined clicks, 13M impressions. The recommended Q2 production target.
119
≥ 250 clicks
Quick-win tier within the 119
≥ 500 clicks. Highest urgency / fastest payoff retrofit candidates.
65
≥ 500 clicks
Why 119 and not 631. The ≥250-clicks threshold is tight enough to feel manageable for a Q2 production line yet substantive enough to move the citation surface. At 5 retrofits/week, 65 ship in the 90-day window. Pattern within the 119: 81 are /blogs/ pages — exactly the format AI engines like to cite — making the blogs the cleanest sub-list to attack first.
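The funnel above can be sketched as a filter chain. The row fields (`url`, `clicks`, `cited`) are illustrative assumptions — the real pipeline joins the GSC top-1,000 export against the Writesonic citation data:

```python
# Sketch of the 119-page funnel as a filter chain over a GSC export.
# Row fields (url, clicks, cited) are illustrative assumptions — the real
# pipeline joins the GSC top-1,000 against the Priority 100 citation data.

def priority_queue(rows, min_clicks=250):
    # In scope: www.synopsys.com only — careers / solvnet / investor / news
    # subdomains fall out here (the 1,000 -> 864 step).
    in_scope = [r for r in rows
                if r["url"].startswith("https://www.synopsys.com/")]
    # Opportunity surface: >= 50 clicks and zero AI citations (the 631).
    uncited = [r for r in in_scope if not r["cited"] and r["clicks"] >= 50]
    # Headline queue: tighten to >= 250 clicks (the 119).
    return [r for r in uncited if r["clicks"] >= min_clicks]

rows = [
    {"url": "https://www.synopsys.com/glossary/a.html", "clicks": 600, "cited": False},
    {"url": "https://www.synopsys.com/blogs/b.html",    "clicks": 120, "cited": False},
    {"url": "https://www.synopsys.com/ip/c.html",       "clicks": 900, "cited": True},
    {"url": "https://careers.synopsys.com/jobs.html",   "clicks": 400, "cited": False},
]
print([r["url"] for r in priority_queue(rows)])
# Only the glossary page survives all three filters.
```

Raising `min_clicks` to 500 yields the 65-page quick-win tier from the same chain.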
Trajectory update · May 2026
SEMrush SERP-features cross-check tells a sharper story than the snapshot above. On the 119 priority pages, Google AI Overview citations have doubled in 7 months — from 6.6% to 13.8% of their ranking keywords (Sept 2025 → April 2026). About 47% of these pages are already in AIO on the majority of their keywords; the Priority 100 prompt set was missing them. The retrofit programme isn't earning citations from scratch — it's accelerating a trajectory that's already running, plus addressing the smaller set of genuine flat-trajectory pages.
SEMrush AIO citation trend · 7 months
Synopsys's citation share inside Google AI Overviews is accelerating across every measurable layer.
Sept 2025 → April 2026 · www.synopsys.com
AI Overview presence (SERP coverage)
35% → 61%
Share of Synopsys-ranking keywords whose Google SERP shows an AI Overview. Nearly doubled in 7 months.
Synopsys cited IN AIO (property-wide)
2.3% → 6.2%
Share of all SEMrush-tracked keywords where synopsys.com is the cited source IN the AI Overview. Nearly tripled.
Q119 in AIO (the 119-page queue)
6.6% → 13.8%
Same metric, restricted to the 119 priority pages. Doubled in 7 months — without retrofit work.
Three buckets, three plays. The retrofit programme splits into: Deepen (~56 pages already in AIO on a majority of their keywords — push from ~60% to ~90% citation depth), Close (small set where AIO fires on the SERP but Synopsys is absent — heaviest retrofit work), and Build (flat-trajectory content pages like autonomous-driving-levels at 14% — targeted retrofit to trigger AIO firing).
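The triage can be sketched as a simple classifier over two per-page shares: how often an AI Overview fires on the page's ranking keywords, and how often the page is the cited source inside it. The 50% thresholds and field shape are illustrative assumptions, not the exact rule behind the ~56-page Deepen bucket:

```python
# Sketch of the Deepen / Close / Build triage. Inputs are per-page shares of
# ranking keywords (0..1). Thresholds are illustrative assumptions.

def triage(aio_fires_share: float, cited_in_aio_share: float) -> str:
    if cited_in_aio_share > 0.5:
        return "Deepen"   # cited on a majority of keywords — push citation depth
    if aio_fires_share > 0.5:
        return "Close"    # AIO fires but Synopsys is absent — heaviest retrofit
    return "Build"        # flat trajectory — retrofit to trigger AIO firing

print(triage(0.90, 0.60))  # Deepen
print(triage(0.80, 0.10))  # Close
print(triage(0.14, 0.00))  # Build — e.g. a flat page cited on ~14% of keywords
```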

Two measurement frames. AI Visibility for Synopsys is best read across buy-intent measurement (Writesonic Priority 100 — narrow strategic prompt set, captures consideration-set positioning, e.g. "Which companies provide the best ADAS chip solutions?") and cluster-coverage measurement (SEMrush SERP Features — broad keyword universe, captures educational/informational dominance). Synopsys looks different on each: Automotive is 6.6% on Writesonic (consideration-set weak — NVIDIA wins) but 14.9% on SEMrush (winning the educational layer). Both are true; both are useful.

The top 10 priority targets within the 119 (April 2026 GSC snapshot):
Clicks Impressions CTR URL Topic
5,986 1.7M 0.35% /glossary/what-is-a-battery-management-system.html Glossary
5,640 991K 0.57% /blogs/chip-design/autonomous-driving-levels.html Automotive
2,000 783K 0.26% /glossary/what-is-autonomous-car.html Automotive
1,820 78K 2.33% /glossary/what-is-serdes.html Silicon IP
1,665 29K 5.77% /manufacturing/quantumatk.html Manufacturing
1,450 244K 0.59% /glossary/what-is-universal-flash-storage.html Silicon IP
1,427 13K 10.72% /ai/ai-powered-eda/videos.html AI-Powered EDA
1,232 212K 0.58% /glossary/what-is-an-rlc-circuit.html Silicon IP
1,230 380K 0.32% /glossary/what-is-wiring-harness.html Automotive
1,165 150K 0.78% /glossary/what-is-a-photonic-integrated-circuit.html Photonics
06 · The second lens — Traditional organic search
The same shifted environment, read through GSC. Brand vs non-brand, AI Overview pressure, and the wins underneath the headline.

Brand, non-brand, and the AI Overview structural pressure

The strategy refresh as originally scoped was AI-search-centric. A subsequent analytical pass integrated 16 months of Google Search Console actual click data, 8 months of SEMrush positions, the SE Ranking 645 tracked keywords, the Top 20 Quick Wins audit, the Status Sheet, and the SEMrush backlink overview. The picture this produced reframes Goal 1 materially.

The brand vs non-brand split — three trajectories in one click metric

A single click number conflates three structurally different segments. Disaggregated, the picture looks very different. Last 3 months YoY:

Brand vs non-brand · YoY clicks
Demand is rotating — not falling.
Last 3 months · GSC actuals
Brand traffic
+41.0%
151,333 vs 107,329 clicks YoY · Impressions +129% · CTR 9.41% (was 15.30%). Post-Ansys brand demand showing up in the data.
Non-brand top-1000 clicks
+0.3%
Essentially flat (53,000 vs 52,852). Rankings improving — clicks not following.
Non-brand top-1000 impressions
+160.5%
SERP appearances 2.6× — but CTR more than halved, 2.17% → 0.84%. AI Overviews are absorbing the click above the link.
The single click metric mixes these three. Brand traffic is growing strongly. Non-brand clicks are holding flat under massive AI Overview pressure. Total property clicks down −25.4% — but that decline lives in the long tail below the GSC top-1000 cap (intentional pruning + OSG divestiture + AI Overview compounding). The composite KPI in the refresh captures what the single number can't.
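The YoY figures above come from one formula; a quick sketch reproducing them from the GSC actuals quoted in this section (pure arithmetic, no assumptions beyond the numbers):

```python
# YoY growth check against the GSC actuals quoted above.

def yoy(current: int, prior: int) -> float:
    """Year-over-year growth rate."""
    return (current - prior) / prior

brand     = yoy(151_333, 107_329)  # brand clicks, last 3 months YoY
non_brand = yoy(53_000, 52_852)    # non-brand top-1000 clicks, same window

print(f"brand {brand:+.1%}, non-brand {non_brand:+.1%}")
# -> brand +41.0%, non-brand +0.3%
```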

The structural cause — AI Overview presence has nearly doubled

AI Overview presence
35% → 61%
Sept 2025 → Apr 2026 on Synopsys-ranking keywords
Average position
20.7 → 7.2
Jan 2025 → Apr 2026 — major ranking gain
Monthly impressions
+54%
7.8M → 12.0M (Jan 2025 → Apr 2026)
CTR (more than halved)
3.36% → 1.32%
Same period — AI Overview pattern

Authority profile — Synopsys vs EDA peers

SEMrush snapshot 2026-05-04. Authority Score is stable at 52 — the click decline is structural AI Overview pressure, not authority erosion.

Domain Top backlinks Referring domains Authority Score
synopsys.com 4.1M 28.6K 52
cadence.com 5.4M 19.6K 54
siemens.com (EDA subdomain) 7.4M 65.3K 70
ansys.com 2.8M 23.4K 54

Concrete PR/media gap: Synopsys has 5 adobe.com backlinks (vs Cadence's 23); 5 apple.com (vs Siemens's 79); 7 bbc.co.uk (vs Siemens's 417). The PR integration ask was already proposed at the Steering Committee; the backlink data sharpens the target list.

Goal 1 reframing — composite KPI replacing single click target

The original +30% YoY click target conflates three trajectories pulling in three directions. The reframe is a composite KPI that respects the segmentation:

Sub-metric What it measures Current trajectory
Brand traffic Health of the brand surface +41% YoY clicks (strong)
Non-brand impressions + CTR resilience Health of non-brand SEO surface; tracks rankings AND CTR resilience Impressions +160%; CTR halved; clicks flat (+0.3%)
AI citation share Cross-cuts to Goal 3 Synopsys 48.8% — leader
Average position Leading indicator of SEO foundations 20.7 → 7.2 (strong)
Long-tail health Net keyword count; intentional pruning vs unintended decline KW count down 16.4%; mostly intentional + AI Overview compounding
Total clicks (deprioritised) Historical reference; expected to decline structurally −25% YoY (long tail dominates)

The Status Sheet context

187 of 192 SEO/Content tasks marked "To start" (only 5 Done). This isn't a discovery — it's the gap that triggered this strategy review. The original strategy committed to a task volume that exceeded agency-side resource available, especially as client priorities concentrated on DT/DC and the Ansys migration discovery absorbed capacity. The refreshed 57-action 30/60/90 is the response. The 192-task list should be archived as superseded.

07 · What actually happened in Q1
Pillar by pillar — what shipped, what slipped, and the two unplanned deliverables that turned out to matter most.

The Q1 reality check, pillar by pillar

The 90-day plan committed to specific deliverables. Q1 reality landed differently — three structural realities re-shaped capacity. Each pillar below pairs what was committed with what actually happened.

1
AI Visibility / GEO Measurement — Methodology rebuilt; apples-to-apples baseline now in place
Q1 commitment

Build custom 360-prompt tracking framework. Apply GEO best practices to top 10 cited pages.

Actual Q1 status

Methodology rebuilt to Priority 100. 100 prompts in production; 260 queued for Phase 2. Engine coverage: 3 of 5 (Google AI Overviews, Gemini, ChatGPT). Three analytical memos delivered. Page-level GEO retrofit work deferred to Q2 in favour of methodology rebuild.

2
Content Cluster Build — DT and DC active; expanded scope; sequencing diverged
Q1 commitment

Launch Digital Twin and Data Centers clusters. 15 net-new pages (5 Silicon IP, 3 Auto, 2 Verification, 5 Others). 10 GEO-optimised Verification pages.

Actual Q1 status

DT/DC strategies developed and revised. 18 pieces total in DT+DC (expanded from 8+7=15). Several pieces in internal review. Physical AI emerged as third active cluster. Mar 24 sync decision: 7 of 12 clusters had stakeholder feedback to proceed; remaining 5 deferred.

3
Technical SEO Remediation — Substantially advanced; schema unlock pending dev cycle
Q1 commitment

robots.txt (Wk 1), Schema App for AEM (Wk 2-6), top 500 redirect chains (Wk 1-8), duplicate content (Wk 4-10), mobile nav (Wk 6-12). 21,384 alt-text gaps + 1,257 hreflang issues queued for Q2-Q3.

Actual Q1 status

Robots.txt deployed week of 2026-02-05 (Keri-confirmed). Redirect chains: 1,786 → <1,000 by 2026-02-18. Schema App tool retired; direct JSON-LD Phase 1 templates delivered 2026-04-01. Hreflang dev handoff Dec 2025 + Jan 2026 (17-URL de-de removal). Noindex/nofollow audit done 2026-02-02. Breadcrumbs in dev pre-engagement. Alt-text unmoved. Mobile nav and duplicate content status TBC via Branden ping.

4
Divestiture Digital Equity Protection — Framework delivered; Processor IP June 1 imminent
Q1 commitment

Develop Tiered Asset Strategy framework. Apply to upcoming Processor IP divestiture.

Actual Q1 status

Framework delivered 2026-03-26 in "Strategic SEO Impact of Divestitures" document. 357 URLs leaking authority quantified. Processor IP timeline: 2026-06-01 (5 weeks). Newly surfaced 2026-04-14: TPT/Sabre/Simpleware Ansys-products August divestiture — second tier-mapping exercise required for Q3.

5
Glossary / Reference Content — Steady, but format reframe needed
Q1 commitment

Continue scaling. Identify gaps. (94-page baseline, 96% rank #1, 3,578 avg clicks/page.)

Actual Q1 status

Baseline reaffirmed. "What is an Electronics Digital Twin" and "What is Physical AI" glossary drafts in production. 40 zero-AI high-traffic glossary pages identified as expansion targets via the GSC×AI overlay. Strategic reframe: AI Overviews structurally consume the click — glossary is now a citation/E-E-A-T asset, not a click asset.

6
Cross-Functional Integration — Forum stood up; SD presentation pending Stephan's return
Q1 commitment

Establish content review cadence, PR coordination, schema dev support, monthly committee meetings.

Actual Q1 status

Biweekly SEO sync running. AEO/GEO Steering Committee formed 2026-03-06; first meeting 2026-03-30. SD presentation pending Stephan's return 2026-05-04. Separate weekly subdomaining migration call established. SharePoint adopted as canonical file repo.

Three above-the-line wins — not on the original plan

These outputs emerged from doing the planned work and discovering it needed a different shape. They're strategic wins, not scope creep.

Above-the-line Win #1

AI Visibility methodology rebuild

The Feb 2026 baseline had structural coverage gaps. SD rebuilt to a custom 100-prompt set derived from Synopsys's 3,214-keyword universe. Without this, the false "-9.6% decline" narrative would still be the working story. Every subsequent strategic decision rests on this dataset.

Above-the-line Win #2

Tiered Asset Strategy framework

Born from investigating 357 URLs leaking authority to Black Duck and Keysight. The canonical playbook for Processor IP, TPT/Sabre/Simpleware, and any future divestiture. Three tiers, ready for proactive application before the next redirects go live.

Above-the-line Win #3

Schema POC + Best Practices Guide

Branden's mid-quarter pivot from the third-party Schema App tool to direct JSON-LD templates. 12 production-ready templates with 0 Schema.org validation errors, plus a Best Practices Guide for deferred templates informed by competitive audit of Ansys, Siemens, Cadence, MathWorks, Intel.
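For illustration, a minimal JSON-LD template in the direct-embed style. This is a hypothetical sketch against Schema.org's TechArticle type — not one of the 12 production templates, and the property selection is an assumption:

```python
# Hypothetical minimal JSON-LD template in the direct-embed style — NOT one of
# the 12 production templates. Properties follow Schema.org's TechArticle type;
# the selection here is an illustrative assumption.
import json

def tech_article_jsonld(headline: str, url: str, date_published: str) -> str:
    data = {
        "@context": "https://schema.org",
        "@type": "TechArticle",
        "headline": headline,
        "url": url,
        "datePublished": date_published,
        "publisher": {"@type": "Organization", "name": "Synopsys"},
    }
    # Rendered into the page head inside <script type="application/ld+json">.
    return json.dumps(data, indent=2)

print(tech_article_jsonld(
    "What is SerDes?",
    "https://www.synopsys.com/glossary/what-is-serdes.html",
    "2026-04-01",
))
```

The dual payoff described above comes for free with this style: the same embedded block feeds Google's rich-results parsing and LLM ingestion.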

08 · The pressure-test
Seven net-new findings from April — independently recomputed before any of them shaped the refreshed strategy.

Seven findings, independently tested

Bottom line
6 of 7 findings hold. 3 need important refinement before driving strategy decisions. 1 (Schema as highest-leverage technical investment) is not data-supported in the existing analysis — it rests on best-practice judgement rather than Synopsys-specific evidence.

Before propagating findings into a refreshed strategy, we tested whether each one survives independent recomputation. Six hold; three need important refinement; one is unverified by data. Click any finding to see the evidence and the strategic implication.

Finding 1
The competitive threat axis has changed
Holds — stronger than stated · High confidence
"Cadence stable; Intel/NVIDIA/AMD gaining 13–21pts in our adjacencies."
Evidence

Brand-level trend over 76 days: Synopsys -2.04pts (stable), Cadence -0.59pts (stable), Intel +13.1pts, NVIDIA +13.6pts, AMD +20.8pts. We tested whether the gains are concentrated in Synopsys-weak topics: Intel skews +19.9pts toward weak topics, NVIDIA +31.8pts, AMD +19.9pts. Cadence skews -48.8pts (concentrated in EDA-strong topics, where Synopsys already is). The threat axis has rotated 90°.
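One plausible way to compute that weak-topic skew is the gap between a brand's average gain in Synopsys-weak topics and its average gain elsewhere. The sketch below uses that definition with illustrative per-topic figures — not the engagement's actual dataset:

```python
def weak_topic_skew(gains: dict[str, float], weak_topics: set[str]) -> float:
    """Average visibility gain in Synopsys-weak topics minus the average
    gain in Synopsys-strong topics. Positive = the brand's gains
    concentrate where Synopsys is weak."""
    weak = [g for t, g in gains.items() if t in weak_topics]
    strong = [g for t, g in gains.items() if t not in weak_topics]
    return sum(weak) / len(weak) - sum(strong) / len(strong)

# Hypothetical per-topic visibility gains (points) for one chip-maker brand.
gains = {"Automotive": 22.0, "HPC": 18.0, "AI/ML": 20.0,
         "EDA": -1.0, "Verification": 0.6}
print(round(weak_topic_skew(gains, {"Automotive", "HPC", "AI/ML"}), 1))  # → 20.2
```

A large positive skew (as here) is the "gains concentrated in our adjacencies" signal; a large negative skew is the Cadence pattern — movement concentrated in topics where Synopsys is already strong.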

Strategic implication

The chip-makers have already overtaken Synopsys in HPC and AI&ML. NVIDIA dominates Auto. The competitive frame in the original strategy ("Cadence is the AI threat") was correct for the EDA-tools fight but blind to the application-domain fight. Goal 3 needs to extend from "maintain leadership" to "extend leadership into adjacencies."

Finding 2
A concrete 119-page optimisation backlog exists
Holds (counts verified) · Framing needs caveat
"119 high-traffic pages not cited in the Priority 100 measurement — 105,751 organic clicks of upside."
Evidence

864 GSC top pages on www.synopsys.com (in scope) — 233 with ≥1 AI citation, 631 with zero. Of those 631, 119 have ≥250 clicks each = 105,751 clicks. Verified.
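The backlog derivation is a straightforward filter over the joined dataset. A sketch — the field names and sample rows are illustrative assumptions, not the real GSC × Writesonic join:

```python
# Illustrative page records: GSC clicks joined with AI citation counts.
pages = [
    {"url": "/blogs/a", "clicks": 5640, "ai_citations": 0},
    {"url": "/glossary/b", "clicks": 300, "ai_citations": 2},
    {"url": "/blogs/c", "clicks": 260, "ai_citations": 0},
    {"url": "/blogs/d", "clicks": 120, "ai_citations": 0},
]

# Zero-AI pages with >=250 clicks form the GEO citation queue; the click
# sum is "the click volume associated with these pages today", not upside.
queue = [p for p in pages if p["ai_citations"] == 0 and p["clicks"] >= 250]
total_clicks = sum(p["clicks"] for p in queue)
print(len(queue), total_clicks)  # → 2 5900
```

Run against the real 864-page set, the same two-condition filter yields the 631 zero-AI pages and, within those, the 119-page / 105,751-click queue.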

Strategic implication

The 105,751 should be characterised as "the click volume associated with these pages today" — not "upside". GEO optimisation produces AI citations, not necessarily click recovery. The 119-page list becomes the cleanest, most actionable Q2 production target. The /blogs/ pattern (81 of 119 are blogs — a format AI engines like to cite) is the highest-leverage sub-list.

Finding 3
Automotive is more critical than the strategy assumed
Holds with critical nuance · High confidence
"6.6% AI Vis, NVIDIA dominant, three top zero-AI pages are Automotive."
Evidence

Automotive Synopsys AI Visibility: 6.6% (lowest of 11 topics). NVIDIA dominant at 26.8%. Three top zero-AI pages are Auto cluster: autonomous-driving-levels (5,640 clicks), what-is-autonomous-car (2,000), what-is-wiring-harness (1,230). Half of 8 Automotive prompts return Synopsys absent entirely.

The nuance that changes the recovery path

NVIDIA dominates mention rate at 26.8%. But on citation share — which brand AI engines cite as a source — Ansys is #1 at 15.81%, ahead of Synopsys (13.91%) and well ahead of NVIDIA (6.93%). The recovery path isn't to compete with NVIDIA on brand mentions (PR/marketing terrain). It's to increase the rate at which AI engines reach for joint Synopsys+Ansys Automotive technical authority — exactly the joint-cluster Full Stack treatment.

Finding 4
Glossary winning Google but losing the click to AI
Partially holds — implication wrong · High confidence
"Format produces AI citations beautifully but suffers brutal CTR collapse."
Evidence

AI-cited glossary median CTR: 0.63%. Zero-AI glossary median CTR: 0.52%. Essentially the same bad CTR. All AI-cited pages (across page types) median CTR: 0.97%. All zero-AI: 2.16%. AI citation correlates with click loss, not recovery.

Strategic implication — the reframe

The "double down vs pivot" framing is the wrong question. Clicks are gone regardless of GEO investment — AI Overviews structurally consume the click for "what is X" queries. The right reframe: Glossary is now a citation-driven E-E-A-T asset, not a click-driven SEO asset. Reposition internally before Q2 KPIs are set. Measurement framework: AI citation share (primary), impression share (secondary), ranking position (tertiary), clicks (deprioritised).

Finding 5
Schema markup is the highest-leverage technical investment
Unverified by data · Low confidence on "highest"
"AI engines depend on schema heavily; deployment is the gating factor."
Evidence (or lack of it)

None of the existing memos test the schema-leverage claim against Synopsys-specific data. The claim is consistent with general GEO best practice but not anchored. From the data we have, two technical investments would compete with schema for "highest leverage": (a) information architecture for AI consumption (Quick Answer blocks, Q&A sections) — cheaper than schema deployment; (b) internal linking to surface the 81 zero-AI blog posts.

Strategic implication

Schema is valuable; the relative ranking is unverified. Step 3 should pull the Technical Audit + Schema docs to test against data. Reframe the Steering Committee ask from "dev capacity for schema" to a deliberate hands-on/handoff triage — schema deployment goes via the SD-to-dev handoff path (already proven for redirects, hreflang, schema templates), while SD's hands-on AEM/VM time is spent on SEO-judgement edits where it has the most leverage.

Finding 6
Synopsys+Ansys Full Stack narrative is differentiated
Holds — recognition inverted · High confidence
"No competitor can match the chip-to-system Full Stack story."
Evidence

AI engines mention Synopsys and Ansys together in 1,474 of 22,313 answers (6.6%) — 1.93× what independence would predict. Recognition exists. But the pattern is inverted: 23.6% co-mention in EDA (where Synopsys is dominant alone), near-zero in HPC (0%), Security IP (0%), AI&ML (0.9%), Auto (2.5%) — exactly where the pairing would create competitive separation from chip-makers.
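The 1.93× figure is a standard lift-over-independence calculation. A sketch — the joint count comes from the analysis, but the marginal mention rates below are illustrative assumptions (only the joint rate is quoted in the source):

```python
def co_mention_lift(joint: int, total: int, p_a: float, p_b: float) -> float:
    """Observed co-mention rate divided by the rate expected if the two
    brands were mentioned independently (p_a * p_b)."""
    observed = joint / total
    expected = p_a * p_b
    return observed / expected

# 1,474 joint mentions in 22,313 answers; hypothetical marginals chosen
# so the expected-if-independent rate is ~3.4%.
print(round(co_mention_lift(1474, 22313, p_a=0.285, p_b=0.12), 2))  # → 1.93
```

A lift above 1.0 means AI engines already associate the two brands more than chance would produce; the strategic question is where that association lands, which is what the per-topic breakdown answers.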

Strategic implication

The Full Stack rollout logic should reverse. Don't apply Full Stack to EDA topics where AI engines already pair them and it's irrelevant. Apply it to weak topics — Auto, HPC, AI/ML, Security IP, Multi-Die — where the pairing is currently absent. Every spoke in those clusters should explicitly co-author the Synopsys silicon perspective with the Ansys system simulation perspective.

Finding 7
Measurement coverage is incomplete (3 of 5 engines)
Holds — engine list verified · 95% claim unanchored
"Writesonic covers 3 of 5 engines named in FY26 Goal 3."
Evidence

Verified: Writesonic tracks Google AI Overviews, Gemini, ChatGPT. Perplexity and Microsoft Copilot are not in measurement. The README's "95% of B2B technical search" claim is unanchored — no citable source.

Strategic implication

Two-pronged: Short-term (30-day) — update FY26 Goal 3 wording to honestly reflect 3-engine reality. Don't quietly carry the 5-engine framing. Medium-term (60-90-day) — scope Brand Radar (Ahrefs) addition for Perplexity + Copilot coverage. Decision via Steering Committee.

09 · The three big strategic shifts
Threat axis · Synopsys+Ansys story · Hands-on/handoff triage. The three structural realities that reshape the work for Q2.

Three things look materially different now

Bottom line
The threat axis is rotated. The competitive game now lives in two different battlefields with two different competitor sets. The Synopsys+Ansys story is being recognised in the wrong topics. Apply Full Stack where the pairing is currently absent. The execution constraint is SD's hands-on AEM time, not Synopsys dev capacity. Reframe as a deliberate hands-on / handoff triage.

These three shifts emerged from the pressure-test and the retrospective. Each one changes a priority decision that the 30/60/90 plan operationalises.

1
The competitive threat axis has rotated 90°

The Mar 2026 deck framed Cadence as the primary AI visibility competitor. Cadence is in fact stable alongside Synopsys at 39% AI visibility — a peer in EDA tools, not a closing threat.

The actual competitive movement is Intel, NVIDIA, and AMD gaining 13–21 percentage points in Synopsys's application-domain adjacencies (Automotive, HPC, AI/ML). NVIDIA dominates Auto at 26.8%. Intel leads HPC at 55.0%. NVIDIA leads AI/ML at 45.4%.

The chip-makers have already overtaken Synopsys in three of our weak topics. This is a categorically different competitor set entering Synopsys's adjacencies — chip-makers winning AI mindshare in topics where chip-to-system authority is the differentiator. The competitive monitoring framework should split into two competitor sets, with different KPIs per set.

Topic concentration: average AI Visibility by topic group
Where each brand's visibility lives. Cadence and Synopsys concentrated in EDA-strong topics; chip-makers concentrated in Synopsys-weak topics.
2
The Synopsys+Ansys "Full Stack" story is recognised in the wrong topics

AI engines already pair Synopsys and Ansys in EDA (23.6% co-mention rate), where Synopsys is dominant alone and the pairing adds no competitive separation.

The pairing is near-zero in HPC (0%), Security IP (0%), AI&ML (0.9%), Automotive (2.5%) — exactly the topics where the Full Stack story would create real differentiation from chip-makers, because the chip-makers don't have Ansys's multiphysics simulation breadth.

The cluster rollout logic should reverse. Don't apply Full Stack to topics where AI engines already pair them. Apply it to weak topics where the pairing is absent. Every spoke in Auto, HPC, AI/ML, Security IP, Multi-Die clusters should explicitly co-author the Synopsys silicon perspective with the Ansys system simulation perspective.

Synopsys + Ansys co-mention rate by topic
Pairing recognised in EDA-strong topics (top); absent in weak topics (bottom, red) where it would actually differentiate.
3
Technical execution is bound by SD's hands-on AEM/VM time, not dev headcount

As part of the SOW, Somebody Digital is responsible for hands-on technical SEO fixes — repairing in-line links pointing to 404s and redirects, metadata changes, page copy and element edits, and similar work that goes through Synopsys's AEM CMS. This work is performed by SD's technical SEO team via VM-mediated AEM access.

The VM is materially slower than a standard machine — but more importantly, all work must happen inside the VM. Research, drafting, and planning prepared on a standard machine cannot be copy-pasted in. Every step has to be redone inside the VM. This compounds the time per fix and consumes SD's hands-on capacity faster than is sustainable.

By contrast, where SD packages well-documented technical specs for the Synopsys dev team to implement directly — redirect loop fixes, the hreflang Technical Advisory + 17-URL de-de removal scope, the Schema Phase 1 templates with production-ready JSON-LD — the model works at its best. Akash Verma's same-day acknowledgement on hreflang ("we are already addressing the high-priority issue you identified") is the dev-handoff path operating well.

The reframe: instead of "dev capacity for schema," propose a deliberate hands-on/handoff triage — SD hands-on (via AEM/VM) for SEO-judgement content edits where copy adjustments are needed; SD packages → Synopsys dev implements for scaled deterministic technical changes; joint validation post-deployment. The previous agency was criticised for directing without doing; SD's SOW correctly commits to genuine implementation, and that commitment stands. The triage simply spends SD's hands-on capacity where it has the most leverage.

The hands-on / handoff split
SD hands-on (slow)
In-line link fixes, metadata changes, page copy edits, on-page schema additions where copy needs adjusting
VM compounds time per fix
Dev-handoff (effective)
Redirect rules, hreflang generation, schema template deployment, breadcrumbs, robots.txt
Akash-style same-day acknowledgement
Joint
Initial validation of dev-implemented changes — SD verifies live, dev iterates
The Q2 Steering Committee ask: agree the triage, expand the dev-handoff path where it makes sense, surface batch-deployment options that could reduce VM compounding.
10 · The refreshed strategy
FY26 goals reframed where the data demanded it; pillar priorities resequenced; the Steering Committee ask sharpened.

Four FY26 goals — two unchanged, two refreshed

The four FY26 goals from the Mar 2026 strategy hold in their overall shape. Goals 1 and 2 are unchanged. Goals 3 and 4 are refreshed to match what the data and the engagement scope actually support — without altering anything that would require executive re-approval.

1
Grow Search Traffic
Refreshed (Phase D)
Original wording

Expand visibility in US, EMEA and APAC. Target +30% organic traffic YoY (+1.24M monthly clicks).

Refreshed wording

Original target (+30% YoY clicks) doesn't survive the brand vs non-brand split: brand traffic +41% YoY (strong); non-brand top-1000 clicks flat (+0.3%) despite +160% impressions (AI Overview pressure halving CTR); the long tail shedding clicks. Reframe as a composite KPI — brand traffic, non-brand impressions + click capture rate, AI citation share, average position, long-tail health. Total clicks tracked but deprioritised. Concrete execution surface unchanged: 119-page GEO citation queue + cluster production. FY27 to formalise.

2
Get More Leads from Search
Inherits Goal 1 reframe

Attract new audiences and convert into sales opportunities. Target +40% organic leads YoY.

Concrete execution surface for Q2

Downstream of Goal 1 plus content quality. Inherits Goal 1's composite KPI reframe.

3
Show Up in AI Search
Refreshed
Original wording

Maintain #1 leadership across ChatGPT, Claude, Gemini, Perplexity, Microsoft Copilot.

Refreshed wording

Maintain Synopsys's #1 AI visibility leadership in EDA-tools topics where it is currently dominant; extend leadership into application-domain adjacencies (Auto, HPC, AI/ML, Security IP, Multi-Die) where chip-maker competitors are gaining ground. Currently tracking 3 of 5 engines via Writesonic; evaluate Brand Radar (Ahrefs) for Perplexity + Copilot coverage by Q2.

4
SEO Strength Through Ansys Merger
Refreshed
Original wording

Don't lose traffic during the website migration.

Refreshed wording

Position Synopsys.com for the eventual Ansys subdomain landing under the separate migration SOW: technical readiness (schema markup expressing the Synopsys-Ansys relationship; redirect chain remediation; mobile/CWV improvements); content alignment ("Ansys, part of Synopsys" co-branding integration where applicable); divestiture digital equity protection (Tiered Asset Strategy applied to Processor IP June 1 and TPT/Sabre/Simpleware August).

Six pillars — refreshed priority order for Q2

The pillars themselves are unchanged. The priority sequence is refreshed based on what Q1 taught us. Pillar 4 is now top priority because of the Processor IP June 1 hard date.

1
Pillar 4 — Divestiture Digital Equity Protection
Time-critical. Processor IP closes June 1 (5 weeks). TPT/Sabre/Simpleware August. The framework is ready; application is the urgent work.
TIME-CRITICAL
2
Pillar 6 — Cross-Functional / Steering Committee
Unblocking forum. Most actions across other pillars are downstream of the SC. SD presentation pending Stephan's return 2026-05-04. The four asks cascade through the rest of the plan.
UNBLOCKING
3
Pillar 3 — Technical SEO
Partially advanced. Schema is the largest remaining unlock. Mobile nav and duplicate content status need Branden ping. The hands-on/handoff triage agreed via SC drives how schema deployment proceeds.
SCHEMA UNLOCK
4
Pillar 2 — Content Cluster Build
Resequenced — Automotive moves into Q2 alongside DT/DC. Synopsys+Ansys joint-cluster pattern applies. Manufacturing follows in 60/90. HPC + AI/ML enter 90-day planning.
RESEQUENCED
5
Pillar 1 — AI Visibility
Methodology-robust; execution pivot. Apply GEO best practices to the 119-page backlog. Hold Phase 2 prompt expansion pending engine coverage decision.
EXECUTION PIVOT
6
Pillar 5 — Glossary
Recategorised as citation/E-E-A-T asset, not click asset. New KPI framework. Expansion focused on weak topics where citation magnets help recovery.
RECATEGORISED

Three strategic positions worth standing up

Beyond goals and pillars, the work surfaced three operating positions worth setting as standing patterns rather than one-off decisions.

Position 1

Synopsys+Ansys joint-cluster pattern as standard for weak topics

Every cluster in Auto, HPC, AI/ML, Security IP, Multi-Die is scoped as a joint cluster — Synopsys silicon perspective + Ansys system simulation perspective in pillar pages and spokes. Not Synopsys-only, not Ansys-only.

Position 2

Glossary as citation/E-E-A-T asset, not click-driven SEO asset

Click-based KPIs no longer fit — AI Overviews structurally consume the click for "what is X" queries. Glossary stays valuable as a citation magnet. Measurement framework shifts to AI citation share, impression share, ranking position.

Position 3

The 119-page GEO citation queue is the highest-leverage Q2 execution

119 high-traffic pages not cited in our Priority 100 measurement (3 engines, 76 days), together carrying 105,751 organic clicks today — pages AI engines aren't yet reaching for. Each can be GEO-retrofitted. At 5/week we ship 65 in Q2. The /blogs/ pattern (81 of 119) is the cleanest sub-list. The retrofit case stands across engines — best practice for AI ingestion is engine-agnostic.

10b · Where co-authoring with Ansys lifts both scoreboards
Per-topic Synopsys vs Ansys comparison across two measurement frames. Sorts into four strategic buckets — Co-author lift, Synopsys-only-lend, Ansys-carries-borrow, Ansys-only-territory.

Synopsys + Ansys topic co-authoring map

Computed from Writesonic Priority 100 (buy-intent, 76-day window) and SEMrush organic content footprints (~950 keywords per domain, April 2026). The data sorts cleanly into four strategic buckets — and the Automotive row is the one that should change Q2 cluster sequencing.

Topic                                     Synopsys Visibility   Ansys Visibility   Combined Citation Share   Strategic Bucket
Electronic Design Automation              90.93                 23.63              39.97                     Co-author lift
Silicon Design                            61.93                 18.01              50.24                     Co-author lift
Multi-Die System Integration              39.43                 10.60              40.78                     Co-author lift
Semiconductor IP Solutions                59.36                 0.07               39.15                     Synopsys-only / lend
Verification                              54.80                 4.49               51.01                     Synopsys-only / lend
Silicon IP                                48.66                 0.60               46.42                     Synopsys-only / lend
Manufacturing                             45.41                 1.75               52.35                     Synopsys-only / lend
AI & Machine Learning                     40.01                 0.90               38.83                     Synopsys-only / lend
Security IP                               38.20                 0.00               48.24                     Synopsys-only / lend
HPC & Data Center                         22.15                 0.00               29.51                     Synopsys focus
Automotive                                6.59                  5.75               29.72                     Ansys carries / borrow
Multi-Physics Simulation (SEMrush only)   ~0                    28K traffic        n/a                       Ansys-only territory
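The bucket sorting implied by the table can be sketched as a simple threshold rule. The thresholds below are illustrative assumptions reverse-read from the rows, not the analysis's actual cut-offs:

```python
def bucket(synopsys_vis: float, ansys_vis: float) -> str:
    """Rough bucket rule implied by the visibility table.

    Thresholds are illustrative assumptions, not the analysis's cut-offs.
    """
    if synopsys_vis >= 35 and ansys_vis >= 10:
        return "Co-author lift"           # both brands carry real footprint
    if synopsys_vis >= 35:
        return "Synopsys-only / lend"     # Synopsys established, Ansys absent
    if ansys_vis >= synopsys_vis * 0.5:
        return "Ansys carries / borrow"   # Ansys holds a comparable share
    return "Synopsys focus"

print(bucket(90.93, 23.63))  # EDA → Co-author lift
print(bucket(59.36, 0.07))   # Semiconductor IP → Synopsys-only / lend
print(bucket(6.59, 5.75))    # Automotive → Ansys carries / borrow
```

Under these assumed thresholds the rule reproduces every bucket assignment in the Writesonic-measured rows of the table, which is the useful property: the buckets are not ad-hoc labels but a consistent function of the two visibility columns.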

The Automotive insight

The only topic in the entire 11-topic, 11-brand set where Ansys's Citation Share (15.81%) exceeds Synopsys's (13.91%). Ansys is the more-cited authority when AI engines do cite a source on Automotive prompts. Combined Citation Share doubles to 29.7%. SEMrush organic data confirms it: Ansys holds 2,655 monthly organic traffic on Automotive content (DFMEA, powertrain, HUD, NVH, EV powertrain, lidar) — system-side automotive content that Synopsys does not own. Synopsys holds 7,881 monthly organic traffic on the chip side (BMS, ADAS, autonomous, ASIL). Joint co-authored Automotive content combines both halves of an end-to-end automotive narrative AI engines currently see in fragmented pieces.

The four strategic buckets

Co-author lift — 3 topics
EDA, Silicon Design, Multi-Die System Integration. Both brands have meaningful AI footprint. Joint pillar pages and integration explainers lift the combined citation footprint above either alone. Multi-Die is the strongest evidence-based co-author opportunity outside the dominant topics: chip design (Synopsys) + package thermal/signal integrity (Ansys) — Cadence has no Ansys equivalent.
Synopsys-only / lend — 6 topics
Sem IP, Verification, Silicon IP, Manufacturing, AI & ML, Security IP. Synopsys established (38–59% visibility); Ansys absent (0–4%). Synopsys hosts Ansys content on these topics to broaden Ansys reach without diluting Synopsys positioning.
Ansys carries / borrow ★ — 1 topic
Automotive. Synopsys borrows Ansys's existing automotive citation authority through joint-product positioning. Highest-leverage retrofit play in the deck. NVIDIA, Intel, AMD, TSMC, Arm are the brands AI cites today; Synopsys+Ansys joint entity has a credible displacement path.
Ansys-only territory — 1 topic (SEMrush only)
Multi-Physics Simulation: CFD, FEA, combustion, electromagnetism, HFSS. Ansys 28K monthly organic traffic, 39% AIO firing. Synopsys near-zero. Outside the EDA-flavoured Writesonic Priority 100 today; Phase 2 prompt expansion should probe this category to put it on both measurement frames.
Strategic implication: the Synopsys+Ansys "Full Stack" pattern is the standard treatment for weak-topic clusters. Co-author-lift topics get joint pillars; Synopsys-only-lend topics get Ansys-hosted content on Synopsys.com; Ansys-carries-borrow (Automotive) gets the highest-leverage joint retrofit work; Multi-Physics gets prompt-expansion-driven inclusion in the buy-intent measurement frame. Full per-topic methodology and per-cluster action recommendations in the companion analytical document at 05_AI_Visibility/Synopsys_Ansys_Topic_Strength_Comparison_2026-05-04.md.
11 · The 30/60/90 plan
57 actions across May, May–June, and June–July. Every one tagged with owner, AEM access requirement, success criterion, and due date.

The portfolio — Now, Next, Later, Always-On

Bottom line
Q1 taught us: a 90-day plan treated as a closed system over-commits. The work doesn't stop at 90 days. The portfolio shows the full arc — what's committed for Q2 (Now), what's planned for Q3 (Next), what's recognised for Q4 / FY27 (Later), and what runs continuously underneath all of them (Always-On).

Now is the operating commitment — 57 dated actions with owners and success criteria. Next is the planned Q3 deliverable list, scoped but flexible. Later is the recognised work for Q4 and into FY27 — visible so it doesn't surprise us, not committed yet. Always-On is the perpetual cadence that runs underneath everything. The matrix below is the whole picture in one view.

Methodology · how the portfolio works
Four horizons, four levels of commitment.
All six pillars × all four horizons
Each pillar has work in every horizon. The framework lets the Steering Committee sequence and trade off rather than approve / reject each item. If a Next item needs to come forward, something in Now moves out — the resource shape stays honest. If something new comes up, we can place it in the right horizon based on capacity. Items marked "out of current SOW" appear in Later for visibility — those exist as separate engagements (Ansys subdomain migration, etc.) and aren't committed in this plan.
Now
Q2 · May–Jul 2026
Committed work, dated and owned. Hard accountability.
Next
Q3 · Aug–Oct 2026
Planned deliverables, owners identified, dates flexible.
Later
Q4 / FY27 setup
Recognised work for visibility. Not committed yet.
Always-On
Continuous
Cadences and recurring work. No fixed horizon.
Now · Q2 · May–Jul 2026
57 dated actions across May, May–June, and June–July
Click any action for detail · filter by pillar / AEM
Every action has primary owner, supporting owners, AEM access requirement, success criterion, dependencies, and due date. This is the operating commitment. Three critical-path chains and the Q2 success test live below.
Pillar:
AEM:
57 of 57 actions shown
30-day window
May 2026
2026-04-30 → 2026-05-30
Unblock the Steering Committee, secure Processor IP readiness, set up production lines.
60-day window
May–June 2026
2026-05-30 → 2026-06-29
Land Processor IP cleanly. Begin AEM Batch 1. First Automotive content. Brand Radar decision.
90-day window
June–July 2026
2026-06-29 → 2026-07-29
Schema Batch 2. Phase 2 prompt expansion. Cluster expansion. August divestiture readiness.

Three critical-path chains

Processor IP readiness: 4.1 → 4.6 → 4.7 — June 1 hard date
Steering Committee unlock: 6.1 → 6.3 → 3.3 → 3.5 — Schema Batch 1 deployment is downstream
GEO retrofit production line: 1.3 → 1.4 → 1.7 → 1.9 — cumulative 50-65 pages by 90-day

The single test for Q2 success

"By the end of June:
  • Has Processor IP been redirected cleanly under the Tiered Asset Strategy on June 1?
  • Has AEM Batch 1 schema been deployed by late June via the dev-handoff path?
  • Has the first Automotive spoke been published as a Synopsys+Ansys joint piece?
  • Have 10 pages of the 119-page GEO citation queue been retrofitted?
  • Has the Goal 1 reframing brief landed at the Steering Committee, with the composite KPI accepted in principle?
  • Has Adobe Analytics access been secured (or formally acknowledged as a blind spot)?
  • Has the Steering Committee formally accepted the hands-on/handoff triage, content review cadence, monthly cadence, and brand vs non-brand monitoring framework?"

If yes, the refresh has converted into shipped work.

Next · Q3 · Aug–Oct 2026
Planned deliverables, owners identified, dates flexible
Deliverable-level scope · per pillar
These are the Q3 commitments — scoped at the deliverable level, with primary owners identified, but date-banded rather than dated. Anything in Now that slips lands here. New work surfacing during Q2 finds its home here based on capacity. The Q3 plan firms up at the end of Q2's last Steering Committee.
What firms up Q3: the post-Processor IP signal capture (4.13), the Brand Radar / 5-engine decision (1.6), the Steering Committee meeting #3 (6.7) where Q3 priorities are formally endorsed, and the Q2 portfolio review deciding what carries forward.
Later · Q4 2026 / FY27 setup
Recognised work — visible, not committed
Theme-level · for visibility and capacity planning
Work the Q1 review surfaced as necessary but that doesn't fit Now or Next. Listed at the theme level rather than the deliverable level — the point of Later is visibility, not commitment. Items move from Later into Next as capacity opens. Out-of-current-SOW items are listed for completeness — those are recognised dependencies that exist as separate engagements (e.g. Ansys subdomain migration), not work this plan owns.
Out of current SOW · listed for visibility only
These are recognised cross-impact dependencies. They exist as separate engagements or have been retired — visible here so the broader picture is honest, not committed in this plan.
  • Ansys-to-Synopsys subdomain migration execution (separate SOW since 2026-03-10; ~September 2026 target)
  • Migration Phase 1–3 risk assessment, Phase 4A/4B scope (separate SOW)
  • Schema App for AEM tool deployment (retired Q1 in favour of direct JSON-LD; could be revisited if scale requires)
  • Unified Cookie ID workstream / Adobe Launch consolidation (separate workstream)
  • Server load testing of Synopsys infrastructure for Ansys content (separate SOW)
  • Ansys-side content audit or competitive analysis (separate SOW)
Always-On · continuous
The cadences and recurring work running underneath every horizon
Perpetual · no horizon
These don't belong in a horizon — they're the operational rhythm that runs underneath all of them. Naming this band explicitly stops it competing with sprint commitments for capacity. Every monthly status pack, every SC meeting, every cluster publish, every backlink monitor sit here. Q1 over-committed in part because this band wasn't visible.
Why this band matters. Always-On is roughly 30–40% of agency-side capacity in any given month. Without it being visible, sprint plans assume 100% capacity is available for sprint work — and that's how 187 of 192 Q1 tasks ended up "to start."
12 · What we need from Synopsys
The asks back to you, the open items that would sharpen the plan, and the ten risks we've planned around.

Decisions, open items, and risks

Bottom line
Two levels of approval, not one. At the portfolio level, the Steering Committee validates the shape — is the right thing in Now, what should move from Later into Next, what's missing. At the item level, the five most time-sensitive asks below unlock the 30-day Now window. Both conversations matter, and they're different.

The plan lands defensibly without these, but each one would tighten Now actions where it pertains. The risks are logged with mitigations to keep the plan adaptive.

The five most time-sensitive asks
What we need from Synopsys to unlock the 30-day window.
In priority order · everything else lives below
1
Schedule the Steering Committee meeting for the week of 2026-05-04 onward — anchors the entire 30-day window. Slip risks Action 6.1 → 6.3 → 3.3 → 3.5 chain.
Decision · this week
2
Confirm Processor IP scope — Tier 1 commercial only, or does the divestiture include educational content? Determines Tombstone Notice scope before June 1.
Confirmation · 24–48h
3
Adobe Analytics access (or formal acknowledgement that we'll continue without it). Closes the conversion-side blind spot behind the Goal 1 reframing — the highest-leverage measurement step in Q2.
Approval · Q2
4
Branden status pings — mobile navigation status (Q1 commitment Wk 6–12, not surfaced) and duplicate content remediation (Q1 commitment Wk 4–10, not surfaced). Either confirm shipped or surface the blocker.
Status · 24–48h
5
Brand Radar evaluation budget — approval in principle to assess upgrading from 3-engine to 5-engine AI tracking. Affects the Phase 2 prompt expansion path in 60–90 days.
Approval · 60-day

Items where Synopsys input would sharpen the plan

  • Branden ping: mobile navigation status (Q1 commitment Wk 6-12; not surfaced in Q1 communications). 24-48 hour response.
  • Branden ping: duplicate content remediation status (Q1 commitment Wk 4-10; not surfaced).
  • Branden confirmation: Processor IP scope is Tier 1 only? Or does it include educational content (which would expand the Tombstone Notice scope)?
  • TPT / Sabre / Simpleware August scope: URL footprint, retained/divested split, expected close date.
  • Steering Committee meeting date (week of 2026-05-04 onwards) — anchors the entire 30-day window.
  • Brand Radar evaluation budget approval in principle.

What this plan explicitly does NOT include

  • Ansys-to-Synopsys subdomain migration execution (separate SOW since 2026-03-10)
  • Migration Phase 1-3 risk assessment, Phase 4A/4B scope (separate SOW)
  • Schema App for AEM tool deployment ($15-25K/year line item — retired in Q1)
  • Unified Cookie ID workstream / Adobe Launch consolidation (separate workstream)
  • Server load testing of Synopsys infrastructure for Ansys content (separate SOW)
  • Ansys-side content audit or competitive analysis (separate SOW)

10 risks logged with mitigations

R1 · Processor IP scope expands beyond Tier 1
Probability: Medium · Impact: High
Mitigation: Branden ping (Action 4.1) is the early signal; Tombstone templates are flexible; the 30-day window has slack.
R2 · SC presentation slips past the week of May 4-8
Probability: Low · Impact: High
Mitigation: Cristiano can lead if needed; Chris can support; Keri already has the structure approved.
R3 · AEM Batch 1 deployment slips past the 60-day window
Probability: Medium · Impact: Medium
Mitigation: The hands-on/handoff triage in the 30-day window surfaces slip risk early; reorder priorities within the window if needed.
R4 · TPT/Sabre/Simpleware August scope larger than expected
Probability: Medium · Impact: Medium
Mitigation: The 90-day window includes a scoping action; pre-work begins early.
R5 · Synopsys+Ansys SME bandwidth overcommitted
Probability: Medium · Impact: Medium
Mitigation: The joint cluster pattern means SMEs review fewer but more meaningful pieces; staggered cadence per Action 6.6.
R6 · Brand Radar approval declined
Probability: Low · Impact: Low
Mitigation: Step 3 already framed this; the FY26 Goal 3 wording update accommodates either outcome.
R7 · Internal Synopsys migration introduces fresh Pillar 3 disruption
Probability: Low · Impact: Medium
Mitigation: Crawl cadence captures it; SD has a track record of fast turnaround (Feb 13 → Feb 18 example).
R8 · Ansys subdomain migration timing pulls forward
Probability: Low · Impact: High
Mitigation: Out of scope here, but cross-impact items (schema for the relationship; divestiture work) need to be ready earlier.
R9 · Content review workflow bottlenecks at SME approval
Probability: Medium · Impact: Medium
Mitigation: Establish a 5-business-day SLA for SME reviews; surface stuck items in the monthly SC.
R10 · Steering Committee monthly cadence not honoured
Probability: Low · Impact: High
Mitigation: Calendar locked early; Keri owns convening; SD prepares the pack 1 week in advance.
A · For the record · Appendix
The dated timeline of decisions, deliverables, and quoted moments that anchor the retrospective.

Q1 timeline — selected milestones

A reference rail. Useful for re-reading any other section against the actual sequence of events.

2025-08-22
Original SD proposal sent to Branden after the August 21 presentation. The project effectively begins in September.
2025-10-22
Synopsys SEO Kick-Off deck delivered.
2025-11-04
Keyword Taxonomy Sync. SNPS Taxonomy provided by Branden.
2025-12-02
Technical Audit meeting. Branden: "For breadcrumbs, we've made a development story for this already (Wrike id 4325015169)."
2026-01-07
Hreflang Technical Advisory delivered to Akash Verma. Akash same-day: "We are already addressing the high-priority issue you identified."
2026-01-13
First biweekly sync. Strategy first draft complete.
2026-01-27
SEO/GEO Strategy presentation to Synopsys (the canonical Mar 2026 deck content).
2026-02-02
Branden completed noindex/nofollow audit changes.
2026-02-05
Keri: "Hi Elma and team, The updated Robots.txt was deployed this week. Please take a look when you get a chance."
2026-02-18
SD delivered fresh redirect chain crawl. Branden: "So we now have less than 1k on the site?" Stephan: "Yes, that seems to be the case based on the latest crawl."
2026-03-10
Cristiano sent the updated SOW with Phase 4 to Anish; the formal moment the migration became a separate engagement.
2026-03-24
All internal redirect loops fixed. List of 27 AEM templates provided by Keri. Cluster strategy expanded to broader top-level coverage.
2026-03-26
SD delivered "Strategic SEO Impact of Divestitures", the birth of the Tiered Asset Strategy framework.
2026-03-30
First AEO/GEO Steering Committee meeting.
2026-03-31
Branden's schema approach pivot: "I want to change our approach... let's start with a simple POC using the stable templates we already have."
2026-04-01
SD delivered Schema Templates Phase 1 + Best Practices Guide.
2026-04-14
Biweekly sync. 37 GB of migration data collected. Processor IP close tracking to 2026-06-01. TPT/Sabre/Simpleware August scope surfaced.
2026-04-21
Writesonic Priority 100 export taken (final Q1 dataset).
2026-04-28
Steering Committee deck approved by Keri. Keri: "This looks like a great outline for the presentation! I'm excited to move forward with this." Meeting awaiting scheduling.
2026-04-30
This Q1 review begins.