Tanvi Biyani
Business Strategy & Operations

I drive business operations by turning ambiguity into structured execution—and building the systems teams actually run on.

I work across data, workflows, and product to bring clarity, speed, and ownership to how organizations operate and scale.

Dance Music Art Travel New People Cooking Dog Mom
CAO Data Lake → From Scattered Silos to a Unified Enterprise Foundation
What was breaking

Data across the Chief Administrative Office was distributed across eight independent sub-lines of business, each maintaining its own datasets, storage locations, and structures.

There was no centralized way to bring this data together. Cross-functional analysis required manual coordination and reconciliation, making it slow, inconsistent, and difficult to scale.

Every time someone needed a cross-LOB view, it became a custom effort — limiting the organization's ability to operate with a unified view of its data.

What I noticed

The challenge wasn't a lack of data — it was a lack of alignment.

Each sub-LOB had valuable operational data, but definitions, structures, and ownership varied widely. Without standardization, even a centralized platform wouldn't solve the problem on its own.

The approach

The organization was moving toward building a centralized data lake on Databricks to address fragmentation.

My role focused on ensuring that as data was brought into the platform, it was aligned, usable, and reflected how the business actually operated — not just how it existed in source systems.

What I did
  • Partnered with stakeholders across all eight sub-LOBs to identify and prioritize data assets for onboarding
  • Documented existing data sources, structures, and ownership to define ingestion scope
  • Translated business data requirements into clear user stories for engineering teams
  • Defined how disparate datasets should be standardized and aligned as they moved through the lake (e.g., Bronze → Silver layers)
  • Clarified business definitions and relationships between datasets to ensure transformation logic matched operational reality
  • Worked closely with engineering to ensure the data model would support downstream analytics and future AI/ML use cases
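As a rough illustration of the kind of Bronze → Silver alignment rule we defined, the sketch below maps each sub-LOB's source field names onto one shared Silver schema and tags lineage. The column names, mappings, and sub-LOB labels are hypothetical, and the real work ran on Databricks rather than plain Python; this is only a sketch of the standardization idea.

```python
# Illustrative Bronze -> Silver standardization: each sub-LOB lands raw
# records in Bronze with its own column names; a shared mapping aligns
# them to one Silver schema. All names below are made up for the example.

SILVER_COLUMNS = ["record_id", "sub_lob", "owner", "amount_usd"]

# Per-sub-LOB mapping from source field names to the agreed Silver names.
COLUMN_MAPS = {
    "real_estate": {"id": "record_id", "dept_owner": "owner", "amt": "amount_usd"},
    "security":    {"ref": "record_id", "poc": "owner", "value_usd": "amount_usd"},
}

def to_silver(bronze_record: dict, sub_lob: str) -> dict:
    """Rename source fields to the Silver schema and tag lineage."""
    mapping = COLUMN_MAPS[sub_lob]
    silver = {mapping[k]: v for k, v in bronze_record.items() if k in mapping}
    silver["sub_lob"] = sub_lob  # preserve lineage back to the source sub-LOB
    # Fill any Silver column the source did not provide, so every row
    # has the same shape downstream.
    for col in SILVER_COLUMNS:
        silver.setdefault(col, None)
    return silver
```

The point of agreeing on this layer up front was that engineering could build transformations once, per mapping, instead of reconciling eight bespoke structures downstream.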

I acted as the bridge between business and engineering — aligning multiple teams on what "correct" looked like before data was scaled across the platform.

What changed
  • Established a clear and prioritized onboarding plan across eight sub-LOBs
  • Created alignment on data definitions, structures, and ownership ahead of ingestion
  • Reduced ambiguity around how datasets should be standardized within the data lake
  • Enabled engineering teams to begin building against well-defined, business-aligned requirements

At the time I transitioned off the team, data onboarding was actively underway — with a much clearer path from fragmented source systems → a unified and scalable data foundation.

Access Management → From Email Chaos to a Scalable Intake System
What was breaking

Access requests for dashboards were handled entirely over email.

Requests were sent, forwarded, and buried across long threads, with no consistent way to track status or ownership. Stakeholders often had to follow up multiple times just to understand where things stood.

As volume grew to dozens of requests each week across multiple dashboards and teams, this wasn't just inefficient — it became a governance risk. There was no audit trail, no SLA tracking, and no reliable way to ensure every request was actually completed.

What I noticed

The issue wasn't responsiveness — the team was actively working through requests.

The problem was that email wasn't built to support operational workflows. Requests had no defined lifecycle, which made tracking, prioritization, and accountability inconsistent.

The call I made

Instead of trying to improve how we managed email, I decided to remove it from the process entirely.

I pushed for a centralized intake system where every request would be captured, tracked, and owned from submission through completion.

What I did

I approached this as designing an intake and execution system, not just improving a process.

  • Mapped the full lifecycle of an access request to identify breakdown points
  • Standardized how requests were submitted by introducing a structured intake form
  • Built a centralized workflow to track requests, assign ownership, and enforce SLAs
  • Introduced end-to-end visibility so both the team and stakeholders could track progress in real time
  • Automated routing and status updates to reduce manual follow-ups

The focus wasn't just tooling — it was creating a consistent operating model for how access requests were handled.
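A minimal sketch of the request lifecycle at the core of that operating model: every request carries a status, can only move forward through a defined lifecycle, and can be checked against an SLA. The status names and the three-day SLA window are illustrative assumptions, not the production values.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical lifecycle for an access request; every request moves
# forward through these states and never silently disappears.
LIFECYCLE = ["Submitted", "Assigned", "In Progress", "Completed"]

@dataclass
class AccessRequest:
    dashboard: str
    requester: str
    submitted_at: datetime
    owner: Optional[str] = None
    status: str = "Submitted"

    def advance(self, new_status: str) -> None:
        """Only allow forward movement through the defined lifecycle."""
        if LIFECYCLE.index(new_status) <= LIFECYCLE.index(self.status):
            raise ValueError(f"cannot move from {self.status} back to {new_status}")
        self.status = new_status

    def sla_breached(self, now: datetime, sla: timedelta = timedelta(days=3)) -> bool:
        """Still open past the SLA window: surfaces requests needing escalation."""
        return self.status != "Completed" and now - self.submitted_at > sla
```

Email has none of these properties: no enforced lifecycle, no ownership field, and nothing to query for SLA breaches, which is why the intake system replaced it rather than wrapping it.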

What changed
  • 100% of requests were captured and tracked
  • 0 missed requests after implementation
  • Faster turnaround due to clearer ownership and reduced back-and-forth
  • Full audit visibility and improved governance controls

More importantly, we moved from inbox-driven execution → a system with clear ownership, visibility, and control.

Performance Intelligence → From Manual Reports to Scalable Sales Analytics
What was breaking

Sales performance data existed across multiple datasets, but there was no centralized way to make it actionable.

Managers had no reliable way to evaluate their teams across key dimensions like Pipeline, Revenue, Balances, Client Engagement, and Risk & Controls. Identifying top and bottom performers required manual effort, and there was no consistent way to benchmark individuals against their teams.

At the same time, detailed performance reporting was restricted to managers and distributed manually. Individual contributors had no visibility into their own metrics. As the organization scaled, this created gaps in accountability and made performance conversations inconsistent.

What I noticed

The data itself wasn't the problem — it already existed.

The gap was that performance evaluation lacked structure. Metrics weren't standardized, visibility didn't align with the org hierarchy, and access was limited to a small group.

Without a shared system, performance was being interpreted differently across teams, and the people being measured had no direct access to their own data.

The call I made

I led the effort to build a hierarchical performance intelligence system in Tableau — not just as a dashboard, but as a performance operating system.

The goal was to align analytics with the organization's reporting structure and shift from manager-controlled reporting → role-based, self-service visibility.

What I did

I approached this as designing how performance should be measured and accessed, not just how it should be visualized.

  • Defined a standardized KPI framework across five domains: Pipeline, Revenue, Balances, Client Engagement, and Risk & Controls
  • Structured data to reflect reporting hierarchy so managers could evaluate both direct and indirect reports
  • Built benchmarking capabilities including stack ranking, team averages, and score distributions
  • Implemented role-based entitlements to securely expand visibility to individual contributors
  • Designed drill-down capabilities from team-level to individual performance
  • Led delivery end-to-end from requirements through deployment
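The benchmarking logic reduces to a weighted composite score per person, a stack rank, and a team average to compare against. A minimal sketch, with made-up domain weights and scores (the actual weighting lived in the Tableau data model):

```python
# Hypothetical weights across the five KPI domains; the real weighting
# was defined with the business, not hard-coded like this.
WEIGHTS = {"pipeline": 0.25, "revenue": 0.30, "balances": 0.15,
           "client_engagement": 0.15, "risk_controls": 0.15}

def composite(scores: dict) -> float:
    """Weighted composite score across the five KPI domains."""
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

def stack_rank(team: dict) -> list:
    """Return (name, composite) pairs, best performer first."""
    return sorted(((name, composite(s)) for name, s in team.items()),
                  key=lambda pair: pair[1], reverse=True)

# Example team with illustrative scores per domain.
team = {
    "alice": {"pipeline": 90, "revenue": 80, "balances": 70,
              "client_engagement": 85, "risk_controls": 95},
    "bob":   {"pipeline": 60, "revenue": 75, "balances": 80,
              "client_engagement": 70, "risk_controls": 65},
}
ranked = stack_rank(team)
team_avg = sum(score for _, score in ranked) / len(ranked)
```

The same composite feeds both views: managers see the ranked list and distribution, while an individual contributor sees only their own score against `team_avg`, which is where the role-based entitlements come in.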

This required balancing transparency with control — ensuring broader access without compromising data governance.

What changed
  • Standardized performance evaluation across five KPI domains
  • Introduced objective benchmarking through ranking and team comparisons
  • Expanded visibility from manager-only reporting → secure, role-based self-service access
  • Enabled individual contributors to track their own performance for the first time
  • Eliminated manual report distribution and reduced dependency on ad hoc analysis

The system was adopted by ~300 Payments Sales managers, creating a consistent, organization-wide view of performance.

More importantly, performance conversations shifted from opinion-driven → data-driven, with a shared and consistent view across the organization.

LLM Enablement → From Platform Launch to Global Adoption at Scale
What was breaking

The firm had released its internal LLM Suite, but adoption across the Chief Administrative Office was inconsistent and unstructured.

Most employees lacked the training, confidence, and practical guidance needed to integrate LLM capabilities into their day-to-day work. There was no structured enablement program, no centralized governance, and no mechanism to scale what was working.

The platform had clear potential — but without a path from awareness to real usage, it was at risk of being shelved.

What I noticed

The gap wasn't interest — it was the absence of a structured path from awareness to practical integration.

People didn't need more information about the tool. They needed to see how it applied to their actual workflows. And without a governance layer, there was no way to capture, standardize, or scale emerging use cases across teams.

The call I made

I treated this as an adoption and behavior-change problem, not a training problem.

I decided to build an end-to-end enablement model — combining structured training, real use case integration, and governance — to ensure the platform moved from awareness → sustained, scalable usage.

What I did
  • Became a certified LLM Suite instructor to lead enablement efforts
  • Designed a practical, use case-driven curriculum focused on real workflows rather than abstract capabilities
  • Delivered 10+ global training sessions with 100+ attendees each across NAM, LATAM, EMEA, and APAC
  • Iterated on content based on feedback and emerging use cases from teams
  • Worked directly with teams to integrate LLM capabilities into day-to-day operational processes
  • Founded and led the CAO LLM Council to create a structured forum for knowledge sharing, standardization, and governance

The focus was always on practical adoption — ensuring teams could apply LLM capabilities in their work, not just understand them.

What changed
  • Delivered 10+ global sessions with 100+ attendees each across four regions
  • Established a reusable enablement framework and curriculum
  • Created a sustainable governance model through the CAO LLM Council
  • Scaled adoption across multiple sub-lines of business and global teams

Most importantly, LLM usage shifted from abstract awareness → embedded into real workflows and day-to-day execution.

Execution Governance → From Siloed Tracking to a Centralized Operating Model
What was breaking

Across multiple analytics teams, there was no centralized system to manage or track the Book of Work.

Projects were tracked independently through spreadsheets, email, or personal notes. There was no single view of active work, ownership, or timelines. Leadership had no reliable way to monitor progress, and coordination relied heavily on meetings — status updates consumed time that should have gone toward execution.

As work scaled across teams and dozens of concurrent projects, this led to siloed execution, limited accountability, and no structured way to assess whether work was on track, at risk, or stalled.

What I noticed

The issue wasn't that teams weren't doing the work — it was that no one could see it.

Without a shared system, execution health, ownership, and priorities were invisible at the leadership level. And at the team level, people were spending more time communicating status than actually delivering.

The operating model itself had become the bottleneck.

The call I made

I proposed building a centralized execution and governance system — not just to track projects, but to redefine how the team planned, monitored, and communicated work.

The goal was to move from meeting-heavy coordination → a system where progress was visible, ownership was clear, and execution health could be assessed in real time.

What I did

I approached this as designing an execution operating model, not just implementing a tool.

  • Built centralized project tracking across all active work with defined ownership, timelines, and milestones
  • Designed a standardized execution health framework (Not Started, On Track, At Risk, Past Due, On Hold, Cancelled)
  • Created leadership views to provide a real-time, single source of truth across all projects
  • Led change management to transition teams away from siloed tracking and toward a shared system
  • Trained teams on structured workflows and shifted coordination from synchronous meetings → asynchronous tracking

Tools like Jira and Monday.com supported this, but the focus was on creating consistency in how work was managed and reported.
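The health framework itself is simple enough to sketch: a project's status is derived from its dates and reported completion, with On Hold and Cancelled as manual overrides. The field names and the at-risk threshold below are illustrative assumptions, not the rules the team actually configured:

```python
from datetime import date

def execution_health(project: dict, today: date) -> str:
    """Derive one of the six framework statuses for a project record.

    Assumed fields: started (date or None), due (date), pct_complete (int),
    plus optional manual flags on_hold / cancelled.
    """
    if project.get("on_hold"):
        return "On Hold"
    if project.get("cancelled"):
        return "Cancelled"
    if project["started"] is None:
        return "Not Started"
    if today > project["due"]:
        return "Past Due"
    # Illustrative at-risk rule: over 80% of the schedule has elapsed
    # but the owner reports under 80% completion.
    total = (project["due"] - project["started"]).days
    elapsed = (today - project["started"]).days
    if total > 0 and elapsed / total > 0.8 and project["pct_complete"] < 80:
        return "At Risk"
    return "On Track"
```

Deriving status from data rather than self-reporting is what made the leadership views trustworthy: a project could not quietly sit "On Track" past its due date.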

What changed
  • Unified tracking across multiple teams and projects into a single, centralized system
  • Leadership gained real-time visibility into execution health, ownership, and progress
  • Reduced reliance on status meetings by enabling asynchronous visibility
  • Standardized execution health statuses enabled earlier identification of at-risk work
  • Established a scalable operating model for managing and reporting on the Book of Work

More importantly, the team shifted from meeting-driven coordination → a system where execution was visible, accountable, and proactively managed.

Data Reconciliation → From Broken Pipelines to Trusted Enterprise Reporting
What was breaking

Following the merger of Commercial Banking and Corporate & Investment Banking, sales data flowed through a complex multi-system pipeline before reaching enterprise dashboards.

CB teams used Salesforce; CIB used Dealworks. Post-merger, deal data originated in Salesforce, flowed into Dealworks, then into GIBIE (the enterprise data warehouse) before feeding downstream analytics.

During this process, critical deal attributes — stage, size, revenue, ownership — were occasionally lost, misaligned, or incorrectly transformed. As a result, dashboards didn't always reflect actual pipeline activity or performance.

This wasn't just a data quality issue. These dashboards were used for compensation, performance evaluation, and hiring decisions — making inaccurate reporting a real business risk.

What I noticed

The issue wasn't a single broken system — it was that no one was validating data across the pipeline end-to-end.

Each system worked in isolation, but the handoffs between them were where things broke down. And because there was no structured reconciliation process, discrepancies were only discovered after stakeholders spotted inconsistencies in dashboards — by which point trust had already been impacted.

The approach

I focused on introducing a structured reconciliation layer across the pipeline — validating data as it moved from source systems through the warehouse to reporting.

The goal wasn't just to fix discrepancies, but to establish a repeatable way to catch issues early and address root causes upstream.

What I did
  • Analyzed deal-level data across Salesforce, Dealworks, and GIBIE to identify inconsistencies in key attributes
  • Built reconciliation workflows (SQL + Alteryx) to systematically compare data across systems and surface mismatches
  • Traced discrepancies back to specific breakdown points in the ETL pipeline (e.g., field mapping issues, timing gaps, missing values)
  • Implemented interim fixes to maintain reporting accuracy while longer-term solutions were developed
  • Partnered with CRM, data engineering, and warehouse teams to resolve root causes and improve pipeline reliability
  • Worked directly with business stakeholders to validate outputs and ensure reporting aligned with real-world activity
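Conceptually, each reconciliation pass joined deal records from two systems on deal ID and surfaced field-level mismatches plus deals missing from either side. The production workflows ran in SQL and Alteryx; the sketch below, with hypothetical field names, shows the shape of that comparison:

```python
# Deal attributes compared across systems; illustrative names, not the
# actual schema.
FIELDS = ["stage", "size_usd", "owner"]

def reconcile(source: dict, target: dict) -> dict:
    """Compare {deal_id: {field: value}} maps from two systems.

    Returns deals missing on either side plus per-field mismatches as
    (deal_id, field, source_value, target_value) tuples.
    """
    report = {
        "missing_in_target": sorted(source.keys() - target.keys()),
        "missing_in_source": sorted(target.keys() - source.keys()),
        "mismatches": [],
    }
    for deal_id in source.keys() & target.keys():
        for f in FIELDS:
            if source[deal_id].get(f) != target[deal_id].get(f):
                report["mismatches"].append(
                    (deal_id, f, source[deal_id].get(f), target[deal_id].get(f)))
    report["mismatches"].sort()
    return report
```

Running this kind of comparison at each handoff (Salesforce → Dealworks, Dealworks → GIBIE) is what localized a discrepancy to a specific pipeline stage instead of leaving it to surface in a dashboard.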

This wasn't just data validation — it was about restoring confidence in the data used to run the business.

What changed
  • Improved alignment between CRM systems, data warehouse, and reporting dashboards
  • Identified and resolved discrepancies impacting pipeline and performance metrics
  • Established a repeatable reconciliation framework to catch issues before they reached stakeholders
  • Strengthened long-term data pipeline reliability through root cause fixes

Most importantly, the organization shifted from reactively discovering data issues → proactively validating and trusting its reporting.