Vladimir Chavkov

Getting Started with Azure AI Foundry: Build Your First AI Application


Azure AI Foundry is Microsoft’s unified platform for building, evaluating, and deploying AI applications. Previously known as Azure AI Studio, Foundry brings together model hosting, prompt engineering, orchestration flows, and evaluation tooling into a single workspace. If you have worked with individual Azure AI services before, Foundry is where they converge.

What Azure AI Foundry Provides

The platform is organized around several core capabilities: a model catalog for discovering and deploying hosted models, Prompt Flow for orchestrating multi-step LLM pipelines, evaluation tooling for measuring output quality against datasets, and managed endpoints for taking flows to production.

Setting Up Your First Project

Step 1: Create an AI Foundry Hub

A hub is the top-level resource that provides shared infrastructure (networking, identity, storage) for your AI projects.

  1. Go to the Azure portal and search for Azure AI Foundry.
  2. Click Create a hub. Select your subscription and resource group.
  3. Choose a region that supports the models you plan to use (East US and West Europe have the broadest model availability).
  4. The hub automatically provisions a Storage Account and Key Vault.

Step 2: Create a Project

Within the hub, create a project. A project scopes your work: it has its own model deployments, prompt flows, datasets, and evaluation runs.

  1. Open your hub in the Foundry portal at ai.azure.com.
  2. Click New project and give it a name.
  3. The project inherits the hub’s networking and identity settings.

Step 3: Deploy a Model

Navigate to the Model catalog within your project. For a first deployment, GPT-4o is a practical choice.

  1. Search for gpt-4o in the catalog.
  2. Click Deploy. Choose Serverless API for a pay-per-token deployment with no reserved capacity, or Managed Compute for dedicated throughput.
  3. Set the deployment name (e.g., gpt-4o-main).
  4. Configure rate limits and content filters. The default content filter blocks high-severity harmful content.
  5. Click Deploy. For serverless deployments, the endpoint is typically ready in under a minute.

Once deployed, you can test the model immediately in the Playground by sending prompts and adjusting parameters like temperature and max tokens.
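Under the hood, the Playground sends a chat-completions request to your deployment's REST endpoint. The sketch below builds that request payload so you can see how Playground settings map onto API fields; the endpoint URL, API version, and deployment name are illustrative placeholders, not values from this article.

```python
import json


def build_chat_request(endpoint: str, deployment: str, question: str,
                       temperature: float = 0.7, max_tokens: int = 512) -> dict:
    """Assemble the URL and JSON body for an Azure OpenAI chat-completions call.

    The parameters you tune in the Playground (temperature, max tokens)
    map directly onto fields of the request body.
    """
    return {
        # The deployment name is part of the URL, not the body.
        "url": f"{endpoint}/openai/deployments/{deployment}"
               f"/chat/completions?api-version=2024-06-01",
        "body": {
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": question},
            ],
            "temperature": temperature,
            "max_tokens": max_tokens,
        },
    }


# Hypothetical resource endpoint for illustration only.
request = build_chat_request(
    "https://my-resource.openai.azure.com", "gpt-4o-main",
    "What is Azure AI Foundry?",
)
print(json.dumps(request["body"], indent=2))
```

Adjusting temperature in the Playground simply changes the `temperature` field of this body before each call.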

Building a Prompt Flow

Prompt Flow lets you build multi-step LLM pipelines. A typical flow takes user input, enriches it with context, sends it to a model, and processes the response.

Create a Basic Flow

  1. Go to Prompt Flow in your project and click Create.
  2. Choose Standard flow (or start from a template like “Chat with your data”).
  3. The flow editor opens with a visual DAG (directed acyclic graph).

A minimal flow has three nodes: an input node that receives the user's question, an LLM node that calls your deployed model, and an output node that returns the response.

Configure the LLM Node

In the LLM node, write your prompt template using Jinja2 syntax:

system:
You are a helpful assistant that answers questions about Azure cloud services.
Keep answers concise and technical.
user:
{{question}}

Connect the node to your gpt-4o-main deployment. Set temperature to 0.7 for balanced responses.
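At run time, Prompt Flow renders the template by substituting each `{{variable}}` with the corresponding flow input. The snippet below uses a small regex-based stand-in for the Jinja2 engine (so it stays dependency-free) to show what the rendered prompt looks like; it handles only simple variable references, not full Jinja2 syntax.

```python
import re

# The template from the LLM node above.
TEMPLATE = """system:
You are a helpful assistant that answers questions about Azure cloud services.
Keep answers concise and technical.
user:
{{question}}"""


def render(template: str, **values: str) -> str:
    # Replace each {{name}} placeholder with the matching input value,
    # mirroring what the Jinja2 engine does for simple variables.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: values[m.group(1)], template)


prompt = render(TEMPLATE, question="What is an Azure region?")
print(prompt)
```

The rendered text is what actually reaches the model, so inspecting it is a quick way to debug template issues before running the full flow.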

Test the Flow

Click Run and provide a test input like “What is the difference between Azure Functions and Container Apps?” The flow executes each node in sequence, and you can inspect the output at every step.

Evaluating Your Flow

Evaluation is where Foundry adds significant value over manual testing. Create an evaluation run to measure your flow against a dataset.

  1. Prepare a test dataset (JSON or CSV) with input questions and expected answers.
  2. Go to Evaluation and click New evaluation.
  3. Select your flow and dataset. Choose metrics: Groundedness, Relevance, Coherence, Fluency.
  4. Run the evaluation. Foundry scores each response and provides aggregate metrics.
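A test dataset for step 1 can be sketched as JSON Lines, one case per line. The questions, field names, and metric scores below are invented for illustration; match the column names your flow's inputs expect.

```python
import json
import statistics
from pathlib import Path

# Hypothetical test cases; Foundry accepts JSON Lines or CSV datasets.
cases = [
    {"question": "What is Azure Functions?",
     "expected": "A serverless compute service for event-driven code."},
    {"question": "What is Azure Key Vault?",
     "expected": "A managed service for storing secrets, keys, and certificates."},
]

# Write one JSON object per line.
path = Path("eval_dataset.jsonl")
with path.open("w", encoding="utf-8") as f:
    for case in cases:
        f.write(json.dumps(case) + "\n")

# After a run, Foundry reports per-row metric scores and an aggregate;
# the aggregate is a simple mean, illustrated here with made-up scores.
relevance_scores = [4, 5]
aggregate = statistics.mean(relevance_scores)
print(f"mean relevance: {aggregate}")
```

Keeping the dataset in version control alongside your prompts makes evaluation runs reproducible as you iterate.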

Use evaluation results to iterate on your prompts, adjust model parameters, or compare different models for the same task.

From Prototype to Production

Once your flow produces satisfactory results:

  1. Deploy the flow as a managed online endpoint with a single click.
  2. Monitor token usage, latency, and error rates in the built-in dashboard.
  3. Add content safety filters appropriate for your use case.
  4. Integrate the endpoint into your application using the REST API or Azure SDKs.
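Calling the deployed flow from application code comes down to a POST with a key-based (or Microsoft Entra ID) bearer token, assuming the flow was deployed as a managed online endpoint. The endpoint URL, key, and input field below are placeholders; the request is constructed but deliberately not sent, so the sketch runs without a live endpoint.

```python
import json
import urllib.request

# Hypothetical values -- substitute your endpoint's scoring URL and key.
ENDPOINT = "https://my-flow-endpoint.eastus.inference.ml.azure.com/score"
API_KEY = "<your-endpoint-key>"

# The body's field names must match your flow's declared inputs.
body = json.dumps({"question": "How do I scale Container Apps?"}).encode("utf-8")

req = urllib.request.Request(
    ENDPOINT,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; omitted here so the
# example stays runnable without credentials.
```

The Azure SDKs wrap this same call with retry and authentication helpers, so prefer them over raw HTTP in production code.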

Azure AI Foundry removes the infrastructure overhead from AI development. You get model hosting, prompt engineering, evaluation, and deployment in one place. Start with the playground to experiment, build a prompt flow to formalize your logic, evaluate to measure quality, and deploy when ready.


