
Senior GenAI Engineer

Nokia Global
3 days ago
Full-time
On-site
Finland
Description

The Senior GenAI Engineer plays a pivotal role in developing cutting-edge AI platforms and systems within MI-TS-R&I, directly influencing global technology standards and Nokia's future connectivity.

You will be responsible for the end-to-end implementation of advanced AI solutions, from model selection and fine-tuning to deploying production agentic systems that empower researchers and automate complex workflows.



Responsibilities
  • State‑of‑the‑art agents: build plan‑act‑verify systems with structured tool/function calling, orchestration graphs/state machines, routing, and task memory/state.
  • Production readiness: design reliable tools/APIs (schemas, versioning), idempotency, timeouts/retries, rate limits, caching, safe termination, and deterministic fallbacks; integrate sandboxes where needed.
  • Agent evaluation & harnesses: scenario‑based end‑to‑end success metrics plus trajectory/step scoring, tool‑use correctness, grounding/citation fidelity, robustness, and security red‑teaming; wire regression evals into CI/CD.
  • RAG & knowledge integration: hybrid search, reranking, chunking, long‑context routing and (where useful) graph/ontology augmentation with strong grounding.
  • LLMOps & platform engineering: reproducible pipelines for data curation, SFT/PEFT (e.g., LoRA), quantization/distillation, release/rollback; operate serving stacks and optimize GPU utilization, latency and cost.
  • Safety & observability: prompt‑injection/exfiltration defenses, least‑privilege access controls, compliance‑ready logging; tracing across agent steps/tool calls and telemetry‑driven iteration.
  • Developer experience & collaboration: SDKs/templates/docs and reference architectures; partner with researchers, product teams and IT/security to translate workflows into reliable AI capabilities.


Qualifications

Must-have:

  • MS in Computer Science, AI/ML, Telecommunications or related field (or equivalent practical experience).
  • Hands‑on experience deploying and operating LLMs in production; familiarity with modern serving stacks (vLLM/TGI/Triton/TensorRT‑LLM/llama.cpp) and GPU fundamentals.
  • LLMOps/MLOps experience: CI/CD, experiment tracking, model registry, containers, Kubernetes and infrastructure‑as‑code.
  • Strong RAG knowledge: embeddings, vector DBs, rerankers, document processing pipelines and grounding/citation.
  • Practical experience building tool‑using agents and orchestrations (LangGraph/LangChain, Semantic Kernel or custom), including reliable tool APIs and schemas.
  • Experience productionizing agentic systems: reliability patterns, safety gates, access control and audit logging.
  • Experience with evaluation & observability: scenario suites, regression harnesses, tracing, prompt/version management, A/B tests and guardrails.

Nice-to-have:

  • Fine‑tuning (SFT, DPO/IPO), PEFT/LoRA, synthetic data and data quality tooling.
  • Multimodal models (doc understanding/OCR, speech/audio) in enterprise settings.
  • Privacy‑preserving ML/confidential computing or secure enclaves for inference.
  • Domain knowledge in telecom/radio/multimedia/networking/standards; publications or open‑source in LLM/agent systems.

How you work

  • Pragmatic and impact‑driven: start from user workflows, ship iteratively, and measure outcomes.
  • Correctness and safety first: design for failure modes, test extensively, and treat security as a first‑class requirement.
  • Collaborative and communicative: clear documentation, reviews and cross‑functional partnering.