VP of Architecture
Position Overview
The VP of Architecture is the company’s most senior technical individual contributor for hardware, responsible for leading the end‑to‑end chipset roadmap from concept through tape‑out. The role will architect the central processor at the heart of the silicon platform, design the high‑performance memory fabric, and collaborate closely with system software teams to ensure the hardware fully unlocks the performance of the large‑scale inference stack.
Key Responsibilities
▸ Define and own the multi‑generation SoC roadmap: core selection, AI accelerator integration, and high‑speed interconnects for data‑center‑class workloads.
▸ Architect the fusion processor core with workload‑specific extensions for LLM inference, cache acceleration, and secure cryptographic operations.
▸ Design a scalable memory and storage fabric supporting tiered DRAM plus expandable multi‑terabyte memory pools for elastic cache management.
▸ Lead microarchitecture reviews, set performance‑power‑area (PPA) targets, and drive cross‑functional alignment across system software, security, and product teams.
▸ Evaluate and select EDA tools, manufacturing partners, and IP vendors; manage relationships with key ecosystem partners and external design houses.
▸ Represent the company’s technical vision to large cloud and platform partners during requirements‑gathering and proof‑of‑concept engagements.
▸ Establish architectural standards and design guidelines for the entire engineering organization.
▸ Mentor and grow the architecture team as the company scales from founding phase through subsequent funding stages.
Required Skills & Experience
▸ 15+ years in SoC architecture with at least one successful tape‑out on a data‑center or AI inference chip.
▸ Deep RISC‑V ISA expertise: core microarchitecture, custom instruction extensions, privilege levels, and familiarity with the broader open‑source ecosystem.
▸ Expert‑level knowledge of modern CXL‑class protocols: memory expander types, coherency semantics, and fabric topologies for disaggregated memory pools.
▸ Strong DRAM/DDR5‑generation memory subsystem expertise: channel interleaving, timing optimization, and multi‑rank/DIMM configurations.
▸ Familiarity with near‑data / in‑memory processing architectures applied to cache‑intensive or vector database workloads.
▸ Outstanding communication skills; proven ability to present complex trade‑offs to executive‑level and major cloud‑partner audiences.
Preferred Qualifications
▸ Experience at semiconductor or systems companies building solutions for AI/ML or high‑performance compute workloads.
▸ Published work or patents in memory subsystem architecture, coherent interconnects, or AI accelerator design.
▸ Experience collaborating with large cloud or hyperscale infrastructure teams on hardware/software co‑design programs.
Location
▸ California