Palantir Metadata Lead
The Judge Group
Remote
Posted on Apr 30, 2026
The Judge Group is currently seeking a Palantir Metadata Lead to support a large federal agency. This position is remote and requires US citizenship. For immediate consideration, email your resume to rkissinger@judge.com. Please, no third parties.
- Robbie Kissinger
US citizenship required
Bachelor's degree required
In this role, you will serve as a senior technical leader responsible for establishing and scaling the data pipeline and metadata foundation that enables enterprise analytics, AI-driven capabilities, and system integration. The ideal candidate combines deep, demonstrable Palantir (including AIP) expertise, strong data architecture and governance experience, and the ability to lead teams and grow capabilities in complex, regulated environments.
Roles & Responsibilities
- Lead the design, development, and governance of enterprise data pipelines and metadata frameworks within Palantir Foundry and integrated data platforms.
- Serve as the technical and functional lead for data ingestion, transformation, and metadata management across structured and unstructured data sources.
- Define and enforce metadata standards, data models, ontologies, and data dictionaries to enable scalable search, analytics, and cross-system integration.
- Oversee implementation of end-to-end data pipelines, including ingestion, validation, transformation, and delivery into downstream platforms (e.g., Palantir, Databricks).
- Establish and govern data quality, validation, and exception handling processes, ensuring completeness, accuracy, and traceability of data assets.
- Ensure alignment of pipelines and metadata with enterprise architecture, system-of-record requirements, and integration patterns.
- Ensure pipelines effectively support document-based ingestion workflows, including integration with OCR/ICR outputs and downstream metadata extraction processes.
- Enable AI-ready data foundations, supporting downstream capabilities such as semantic search, entity resolution, and advanced analytics.
- Partner with data science and engineering teams to ensure data pipelines and metadata support AI/ML use cases and analytical workflows.
- Drive data governance and lifecycle management, including schema versioning, lineage tracking, auditability, and compliance with security and privacy requirements.
- Oversee integration across platforms (e.g., AWS, Databricks, Palantir), ensuring scalable, secure, and reliable data exchange.
- Lead and mentor teams in a matrixed, cross-functional environment, providing technical direction and quality oversight.
- Engage with senior stakeholders to define data strategy, prioritize initiatives, and translate business needs into technical solutions.
- Operate within an Agile delivery model, overseeing backlog prioritization, technical design reviews, and iterative delivery across workstreams.
What You Will Need:
- Bachelor’s degree
- A minimum of eight (8) years of experience in data engineering, data architecture, or platform integration, with increasing leadership responsibility.
- U.S. citizenship (required), with the ability to obtain and maintain a Public Trust clearance.
- Demonstrated, hands-on expertise in Palantir Foundry (required), including:
  - Designing and implementing production-grade data pipelines
  - Developing ontologies, data models, and relationship mappings
  - Integrating data across multiple enterprise systems
- Demonstrated experience with Palantir AIP (required), including enabling AI-driven workflows (e.g., search, analytics, or decision-support use cases).
- Proven experience leading enterprise-scale Palantir implementations, including architecture, delivery, and governance.
- Strong experience designing and managing large-scale data pipelines in cloud environments (AWS preferred).
- Experience integrating Palantir with Databricks and/or Spark-based platforms for advanced data processing and analytics.
- Expertise in metadata management and data governance, including:
  - Data dictionaries and controlled vocabularies
  - Data lineage and traceability
  - Schema versioning and change management
- Experience implementing data quality frameworks, including validation rules, exception handling, and reconciliation processes.
- Strong understanding of data lake/lakehouse architectures (e.g., AWS S3, Databricks).
- Proficiency in Python, SQL, and/or other relevant languages for data pipeline development.
- Experience designing systems that support AI/ML and analytics use cases, including unstructured data and document processing pipelines (e.g., OCR/ICR outputs).
- Strong leadership and communication skills, with experience managing cross-functional and matrixed teams.
- Experience delivering complex solutions in an Agile environment.
By providing your phone number, you consent to receive automated text messages and calls from The Judge Group, Inc. and its affiliates (collectively "Judge") at that number regarding job opportunities, your job application, and other related purposes. Message and data rates apply, and message frequency may vary. Consistent with Judge's Privacy Policy, information obtained from your consent will not be shared with third parties for marketing or promotional purposes. Reply STOP to opt out of telephone calls and text messages from Judge, or reply HELP for help.