Our Vision

Tunnl is creating a future where artificial intelligence helps brands create meaningful connections with their stakeholders. We are building the best audience intelligence solution to help brands and organizations define their key audiences, know what they think, know where to communicate with them, and produce real-time intelligence on how opinions and perceptions are changing.

Our Mission

Tunnl builds better connections between brands, influencers, and stakeholders. Our mission is to redefine how audiences are created, reached, and activated using audience intelligence. We help brands, advocacy organizations, and sell-side solutions be more effective and move faster.

Role Overview

We need a Data Operations Analyst to help us structure and optimize one of our professional data tables. This person will work specifically on our Opinion Maker product, which combines multiple datasets to create high-value Opinion Maker audiences for our clients. You will develop and implement a standardized taxonomy to improve searchability, segmentation, and data usability. You will also help improve match accuracy between datasets by applying SQL-based solutions and working alongside the engineering team to refine automated processes. Your work will directly impact how Tunnl identifies and categorizes influential decision-makers across policy, healthcare, and finance, making it easier for clients to leverage these audiences for advertising and insights.

The Data Operations Analyst will report to our Product Data Scientist. This is a hybrid role based out of our headquarters in Arlington, VA.

Who You Are

You are a detail-oriented data analyst with strong SQL expertise and a passion for structuring, optimizing, and analyzing datasets for real-world applications. You thrive in a collaborative environment, enjoy solving complex data challenges, and take a methodical approach to improving data usability and accuracy.

- Professional experience in a data analysis, data engineering, or database management role, outside of academic projects
- Strong SQL skillset: able to write complex queries, optimize performance, and troubleshoot data issues
- Experience working with large datasets and entity resolution techniques
- Ability to structure unorganized data into a usable, scalable taxonomy
- Ability to analyze data, identify issues, and implement effective solutions
- Comfortable translating complex technical findings into clear, concise reports and presentations for non-technical stakeholders
- Close attention to detail
- Demonstrated ability to work well within cross-functional teams

Preferred (Nice-to-haves)

- Proficiency with Python for data manipulation and automation
- Experience with AWS services, particularly S3 for data storage and retrieval
- Familiarity with Databricks and Spark for large-scale data processing
- Exposure to AI-based entity resolution or geocoding techniques
- Experience with version control tools such as Git/GitHub for collaboration and code management

If there are items under the "Who You Are" section that you are still working toward or would like to pursue, we still encourage you to apply.
We promise that a real person will review your application when it is received.

What You'll Do

Day to Day

- Develop and implement a taxonomy for professional audiences by integrating and aligning data from various vendors
- Optimize the searchability and segmentation of professional audiences within Tunnl's internal tooling
- Document and maintain a structured data framework to ensure long-term usability and scalability
- Analyze and refine matching between datasets using SQL queries to identify discrepancies and improve accuracy
- Collaborate with engineering to integrate geocoding and AI-based confidence scoring to enhance match rates
- Research and develop new techniques to improve entity resolution across datasets
- Build and execute SQL queries to support internal teams with audience segmentation and custom data requests
- Develop automated reports to monitor data integrity, match rates, and audience performance
- Assist with data validation processes to ensure high-quality outputs for clients
- Manage hand-matching workflows to ensure high-profile matches are correct
- Oversee and create QA processes for hand-matching efforts

In 30 days, you'll be doing a Tunnl deep dive: learning the product and meeting with current stakeholders and the people who have been filling the gap while we hire for the role.

In 60 days, you'll be starting to service requests on your own and potentially working on shorter-term assignments involving the dataset. At this point, you'll also be responsible for improvement ideation.

In 90 days, you'll be presenting the team with a hypothesis and proposal on how best to move forward with the database.

In 6 months, you'll be a Subject Matter Expert on this database and have full ownership of it.

In a year, you'll be making improvements as well as branching into other projects.

Why You Should Apply

- Ownership of your growth in a fast-scaling startup where your impact truly matters
- Comprehensive benefits with excellent medical, vision, and dental coverage
- 401(k) matching so you can invest in your future with our competitive retirement plan
- Unlimited, flexible Paid Time Off (PTO) to recharge and reset on your terms. We trust you to do your best work
- A team driven by curiosity, teamwork, integrity, and a shared passion for solving big challenges
- Immersion alongside some of the brightest minds in the adtech/martech space, right here in DC!