
Risepoint

Senior AI Engineer (Evals/Observability Concentration)

🇺🇸 Remote - US 🕑 Full-Time 💰 TBD 💻 Software Engineering 🗓️ March 7th, 2026
LMS Python C#

Edtech.com's Summary

Risepoint is hiring a Senior AI Engineer (Evals/Observability Concentration). The role centers on designing, implementing, and operationalizing AI systems, with a focus on evaluation frameworks, multi-agent workflows, and observability to ensure quality and reliability. The engineer will contribute directly to an AI-powered Student Journey Platform that is central to the company's long-term strategy.

Highlights
  • Build and maintain evaluation frameworks including LLM-as-Judge, rubric-based scoring, and regression test suites to measure output quality, reliability, and drift, and debug production issues.
  • Architect and implement multi-agent workflows with coordination, tool usage, and failure handling patterns.
  • Build structured observability into AI systems such as tracing, prompt/version tracking, evaluation logging, and cost/latency monitoring.
  • Define and enforce quality gates for AI features using automated evaluations prior to production release.
  • Optimize inference performance focusing on latency, token usage, caching, batching, and routing across models.
  • Design and implement Retrieval-Augmented Generation (RAG) systems and Model Context Protocol (MCP) servers using structured and unstructured enterprise data.
  • Develop and manage fine-tuning workflows including dataset preparation, versioning, and validation.
  • Required technical skills include experience in Python, C#, Java, or similar languages; LLM evaluation and observability tooling (Langfuse, LangSmith, OpenTelemetry); and AI system design.
  • Experience implementing guardrails, policy enforcement, and safety layers in AI systems leveraging LLM-as-Judge for validation and continuous improvement is essential.
  • Preferred qualifications include familiarity with performance optimization for LLMs, production-grade RAG system development, contributions to AI standards, and deployment experience in cloud environments (AWS, Azure, GCP) and Databricks.

Senior AI Engineer (Evals/Observability Concentration) Full Description

Risepoint is an education technology company that provides world-class support and trusted expertise to more than 100 universities and colleges. We primarily work with regional universities, helping them develop and grow their high-ROI, workforce-focused online degree programs in critical areas such as nursing, teaching, business, and public service. Risepoint is dedicated to increasing access to affordable education so that more students, especially working adults, can improve their careers and meet employer and community needs.

The Impact You Will Make 


Risepoint is developing an AI-powered Student Journey Platform and is seeking a Senior AI Engineer with deep expertise in Retrieval-Augmented Generation (RAG), multi-agent architectures, and LLM evaluation frameworks. This role focuses on designing, implementing, and operationalizing AI systems with a strong emphasis on structured evaluation (including LLM-as-Judge), measurable quality, and production-grade reliability. The ideal candidate has experience integrating LLMs with enterprise data sources, building testable and observable AI workflows, and improving system performance through rigorous evaluation and iteration. This role contributes directly to a platform that is central to the organization's long-term strategy. 

How You Will Bring Our Mission to Life 

What You Will Do 

  • Build and maintain evaluation frameworks (LLM-as-Judge, rubric-based scoring, regression test suites) to measure output quality, reliability, and drift, and debug production-level issues as they are detected. 

  • Architect and implement multi-agent workflows with clear coordination, tool usage, and failure handling patterns. 

  • Build structured observability into AI systems (tracing, prompt/version tracking, evaluation logging, cost and latency monitoring). 

  • Define and enforce quality gates for AI features using automated evals prior to production release. 

  • Optimize inference performance (latency, token usage, caching, batching, routing across models). 

  • Collaborate with product and engineering teams to translate business requirements into testable AI system designs. 

  • Contribute to code reviews, architectural discussions, and internal standards for AI development. 

  • Design and implement Retrieval-Augmented Generation (RAG) systems and Model Context Protocol (MCP) servers using structured and unstructured enterprise data. 

  • Develop and manage fine-tuning workflows (SFT, preference optimization, or related techniques) including dataset preparation, versioning, and validation. 

What Success Looks Like 

  • RAG pipelines return grounded, source-attributed responses with minimal hallucination. 

  • Evals are automated, reproducible, and integrated into CI/CD or release workflows. 

  • Multi-agent workflows are observable, testable, and maintainable as complexity increases. 

How Impact Will Be Measured 

  • AI systems demonstrate measurable improvements in quality using defined evaluation benchmarks. 

  • Fine-tuned models and/or programmatic solutions show validated performance gains over baseline foundation models. 

  • AI systems meet defined SLAs for latency, reliability, and cost. 

What You'll Bring to the Team 


Experience That Matters Most 

  • 3-5 years of full stack engineering experience with strong fundamentals in object-oriented programming, applicable design patterns, and AI-focused system design. 

  • Professional experience in Python, C#, Java, or a similar language used in production systems. 

  • Experience with LLM evaluation and observability tooling (e.g., Langfuse, LangSmith, OpenTelemetry-based tracing, custom evaluation harnesses). 

  • Experience implementing guardrails, policy enforcement, and safety layers in AI-driven systems, leveraging LLM-as-Judge for validation and continuous improvement. 

Experience That's Great to Have 

  • Familiarity with performance optimization techniques for LLM-based systems (latency, caching, routing, batching). 

  • Experience building production-grade RAG systems (retrieval pipelines, chunking strategies, embeddings, reranking, context construction). 

  • Experience contributing to internal AI standards, reusable frameworks, or platform-level tooling. 

  • Experience deploying AI systems in cloud environments (AWS, Azure, GCP) and in Databricks (model serving endpoints, MLflow). 

Risepoint is an equal-opportunity employer and supports a diverse and inclusive workforce.