Free Hosting Solutions for Web Apps

Launch your projects with zero-cost, professional-grade hosting services


Static Site Hosting Free Tier

GitHub Pages

Hosts static websites directly from a GitHub repository. Free for personal and organization use.

  • ~100 GB/month bandwidth
  • 1 GB max site size
  • Free SSL certificates
  • Custom domain support
No server-side code (HTML, CSS, JS only)
What is GitHub Pages?

GitHub Pages is a free service that turns your code on GitHub (a popular code storage website) into a live website anyone can visit.

When would I use this?

Perfect for your first website if you're learning HTML, CSS, and JavaScript. Great for:

  • Personal portfolio websites
  • Project documentation sites
  • Simple blogs or informational websites

How it works:

You create a special repository (folder) on GitHub with your website files. GitHub automatically turns these files into a live website with a URL like yourusername.github.io.

Important to understand:

GitHub Pages only works with "static" websites - meaning just HTML, CSS, and JavaScript files. You can't run server code like PHP, Python, or databases directly. For beginners, this is actually perfect, as it keeps things simple!

Getting Started in 5 Minutes

  1. Create a free GitHub account if you don't already have one
  2. Create a new repository named yourusername.github.io (using your actual username)
  3. Upload your HTML, CSS, and JavaScript files to this repository
  4. Wait a few minutes, then visit yourusername.github.io in your browser
Pro tip:

If you don't have website files ready yet, you can choose a theme when creating your repository. GitHub will generate a basic site for you to start with!

Example Structure

Your repository should have at least an index.html file. A basic structure might be:

yourusername.github.io/
  ├── index.html        (your homepage)
  ├── about.html        (about page)
  ├── css/
  │   └── style.css     (your styles)
  └── js/
      └── script.js     (your JavaScript)

Firebase Hosting

Google

Static web hosting from Google Firebase, free on the Spark plan.

  • 1 GB storage
  • 10 GB/month data transfer
  • Free SSL certificates
  • Global CDN
Supports single-page apps and HTTP functions integration
What is Firebase Hosting?

Firebase Hosting is Google's way to put your website online quickly and easily. It's specifically designed to make modern web apps work really well.

When would I use this?

Great for your first "real" web application, especially if you're using modern JavaScript frameworks like React, Vue, or Angular. Perfect for:

  • Interactive web applications
  • Single-page applications (SPAs)
  • Projects that need to grow with more features later

How it works:

You install the Firebase tools on your computer, connect your project, and with a few commands, your site is live on a domain like yourproject.web.app. Firebase handles all the server configuration for you!

What makes Firebase different:

It's part of a larger ecosystem of tools, so when you're ready to add user logins or save data, you can add those services without changing your hosting setup.

Setting Up Firebase Hosting

  1. Create a free Google account if you don't have one
  2. Go to firebase.google.com and create a new project
  3. Install the Firebase CLI tools on your computer with: npm install -g firebase-tools
  4. Login with: firebase login
  5. In your project folder, run: firebase init hosting
  6. Deploy with: firebase deploy
Important:

Like GitHub Pages, Firebase Hosting is for static files (HTML, CSS, JS), but it has better integration with other services. If your site needs to store data or have user accounts, Firebase offers other free services that work seamlessly with Firebase Hosting.
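For reference, a minimal firebase.json (the config file that firebase init hosting generates) might look like the sketch below. The "public" folder name and the catch-all rewrite are the common single-page-app setup, so adjust them to your project:

```json
{
  "hosting": {
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [
      { "source": "**", "destination": "/index.html" }
    ]
  }
}
```

The rewrites rule sends every URL to index.html, which lets a client-side router handle navigation.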

When to choose Firebase over GitHub Pages

  • When you need faster global content delivery (better CDN)
  • When you plan to add a database or authentication later
  • When you need more advanced deployment options
  • When you're using a modern JavaScript framework like React, Vue, or Angular
Firebase Ecosystem for Beginners

Firebase Hosting works seamlessly with these other Firebase services:

  • Firebase Authentication: Add login to your website
  • Firestore: Store and retrieve data
  • Cloud Functions: Run server-side code
  • Firebase Storage: Store files uploaded by users
Real-world example:

A simple social media app could use Firebase Hosting for the website, Authentication for user login, Firestore to store posts, and Storage for user profile pictures - all using the free tier!

AWS Amplify Hosting

Amazon

Provides free static web hosting with a global CDN, free SSL, and continuous deployment from Git.

  • 5 GB storage
  • 15 GB/month bandwidth
  • Free SSL certificates
  • Global CDN
AWS Amplify Explained

AWS Amplify Hosting is Amazon's service for hosting static websites and web applications. It provides a complete solution to build, deploy, and host your site with built-in CI/CD (continuous integration and delivery).

When to use AWS Amplify

Consider AWS Amplify when you need:

  • Automatic deployment when you push code to Git
  • Preview URLs for each branch of your code
  • Password protection for certain environments
  • Easy integration with other AWS services
  • Global content delivery for speed
Key advantage:

Amplify is especially good for teams working on projects with multiple developers or for sites that need different staging environments (development, testing, production).

Getting started with AWS Amplify

  1. Create an AWS account (free tier eligible)
  2. Go to the AWS Amplify Console
  3. Choose "Host a web app"
  4. Connect to your GitHub, GitLab, or Bitbucket repository
  5. Configure branch settings
  6. Deploy your app

Works Well With

  • React, Vue, Angular, or any static site generator
  • Single-page applications (SPAs)
  • Static site generators like Gatsby, Hugo, or Jekyll
Important note:

Like other static hosting services, Amplify doesn't run server-side code such as PHP or Ruby. However, it can be connected to serverless functions through AWS Lambda if you need backend functionality.

Cloudflare Pages

Deploy static sites to Cloudflare's global edge network. Completely free for unlimited sites.

  • Unlimited bandwidth
  • 500 builds per month
  • Up to 20k files per site
  • Global edge network
Cloudflare Pages Explained

Cloudflare Pages is a hosting platform for static websites and JAMstack applications. It stands out by offering unlimited bandwidth on its free tier, making it ideal for projects that might receive significant traffic.

When to use Cloudflare Pages

Cloudflare Pages is particularly useful when you need:

  • Unlimited bandwidth (no overage charges ever)
  • Extremely fast global content delivery
  • Automatic HTTPS for custom domains
  • Preview deployments for each Git branch
  • Integration with Cloudflare's other services
Notable advantage:

The unlimited bandwidth means you never have to worry about your site going down due to a sudden traffic spike, making it ideal for sites that might go viral or have unpredictable traffic patterns.

Getting started with Cloudflare Pages

  1. Create a free Cloudflare account
  2. Go to the Cloudflare dashboard and select "Pages"
  3. Connect your GitHub or GitLab account
  4. Select your repository
  5. Configure your build settings
  6. Deploy your site

Framework Support

Cloudflare Pages works exceptionally well with:

  • React (Create React App, Next.js)
  • Vue.js (Nuxt.js)
  • Angular
  • Static site generators (Gatsby, Hugo, Jekyll)
  • Custom build configurations
Advanced usage:

Cloudflare Pages can be combined with Cloudflare Workers (their serverless functions) to add dynamic functionality to otherwise static sites, creating full-stack applications without traditional servers.
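As an illustrative sketch (not the full Workers setup), a Pages Function is just a JavaScript file that exports an onRequest handler; placed at a hypothetical functions/api/hello.js in your repository, Cloudflare would serve it at /api/hello next to your static pages:

```javascript
// Sketch of a Cloudflare Pages Function (in a real project this
// would be exported as `onRequest` from functions/api/hello.js).
// `context` carries the incoming request, env bindings, and route params.
function onRequest(context) {
  return new Response(JSON.stringify({ message: "Hello from the edge" }), {
    headers: { "Content-Type": "application/json" },
  });
}
```

Bindings to D1 or KV show up on context.env, which is how an otherwise static Pages site picks up dynamic endpoints.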

Azure Static Web Apps

Microsoft

Microsoft's offering for hosting static content and serverless APIs.

  • 100 GB bandwidth/month
  • 2 custom domains
  • Free SSL certificates
  • Global CDN distribution
Integrated authentication and authorization
Azure Static Web Apps Explained

Azure Static Web Apps is Microsoft's platform for hosting static websites with additional capabilities like API integration and built-in authentication. It's Microsoft's answer to services like Netlify or Vercel, designed to simplify deployment and hosting.

When to use Azure Static Web Apps

Azure Static Web Apps is particularly valuable when you need:

  • Integration with Azure Functions for backend APIs
  • Built-in authentication and role-based access control
  • Staging environments for pull requests
  • Integration with GitHub Actions for CI/CD
  • A Microsoft-ecosystem solution
Key differentiator:

The built-in authentication system allows you to add login capabilities to your static site without writing complex backend code - a significant advantage over most other static hosting platforms.

Getting started with Azure Static Web Apps

  1. Create a free Azure account
  2. In the Azure portal, search for "Static Web Apps"
  3. Click "Create" and link your GitHub repository
  4. Configure build settings (framework presets available)
  5. Set up API location if using Azure Functions
  6. Review and create the resource

Features and Integration

Azure Static Web Apps works well with:

  • Modern JavaScript frameworks (React, Angular, Vue, Svelte)
  • Static site generators (Hugo, Gatsby, Next.js)
  • Azure Functions for serverless API endpoints
  • GitHub or Azure DevOps repositories
Important consideration:

While Azure Static Web Apps can be used completely standalone, you'll get the most value when combining it with other Azure services like Functions, Cosmos DB, or Application Insights.

Built-in Authentication

One of the standout features is the integrated authentication system that supports:

  • Microsoft/Azure Active Directory
  • GitHub
  • Twitter
  • Facebook
  • Google

This authentication can be enabled with minimal configuration and no custom code required.
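Once enabled, client-side code can discover the signed-in user by fetching the built-in /.auth/me endpoint, which returns a clientPrincipal object (null when logged out). A minimal sketch of reading that response; the sample payload below is illustrative, not a real response:

```javascript
// Extract the signed-in user from an Azure Static Web Apps
// /.auth/me response; clientPrincipal is null when logged out.
function getUser(authMeResponse) {
  const principal = authMeResponse.clientPrincipal;
  if (!principal) return null;
  return {
    name: principal.userDetails,
    provider: principal.identityProvider,
    roles: principal.userRoles,
  };
}

// Illustrative payload shape
const sample = {
  clientPrincipal: {
    identityProvider: "github",
    userId: "abc123",
    userDetails: "octocat",
    userRoles: ["anonymous", "authenticated"],
  },
};
const user = getUser(sample);
```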

API Integration

Azure Static Web Apps makes it easy to add serverless APIs to your static site:

  • Automatically detects and integrates Azure Functions
  • Provides seamless backend capabilities
  • Uses the same authentication context
  • Supports proxying to other API endpoints
Full-stack solution:

With Azure Static Web Apps + Azure Functions, you can build complete full-stack applications without managing traditional servers, similar to combining Netlify with Netlify Functions or Vercel with Next.js API routes.

Managed Databases Free Tier

Firebase Firestore

Google

Serverless NoSQL document database (part of Google Firebase). Free limits on the Spark plan:

  • 1 GiB of storage
  • 50k document reads/day
  • 20k writes/day
  • 20k deletes/day
What is Firestore?

Firestore is a database that lives in the cloud. Think of it like a giant spreadsheet or filing cabinet that stores all the information your app needs, but it's designed specifically for web and mobile apps.

Why you need a database:

Without a database, any information entered by users or created in your app will be lost when they close their browser. A database lets you permanently save and retrieve information.

When would I use this?

Use Firestore when your app or website needs to save information, like:

  • User profiles and preferences
  • Content for your app (posts, comments, etc.)
  • Game scores or progress tracking
  • Any data you want to store and retrieve later
NoSQL Database Basics

Unlike traditional databases that use tables, Firestore is a "NoSQL" database that stores data in "documents" (similar to JSON objects) that are grouped into "collections." It's designed to be easy to use from JavaScript code.

Structure Example

If you were building a simple blog, you might have:

firestore-database/
  ├── users/                  (a collection)
  │   ├── user123/            (a document with ID "user123")
  │   │   ├── name: "John"    (fields inside the document)
  │   │   ├── email: "john@example.com"
  │   │   └── joinDate: "2023-01-15"
  │   └── user456/
  │       └── ...
  │
  └── posts/                  (another collection)
      ├── post1/              (a document with ID "post1")
      │   ├── title: "My First Post"
      │   ├── content: "Hello world..."
      │   ├── authorId: "user123"
      │   └── date: "2023-01-20"
      └── post2/
          └── ...

How to Use (Simple Examples)

Adding data to Firestore:

// Assumes "db" is an initialized Firestore instance, e.g. firebase.firestore()
// Add a new document to the "users" collection
db.collection("users").add({
    name: "John",
    email: "john@example.com",
    joinDate: new Date()
});

Reading data from Firestore:

// Get all posts
db.collection("posts").get().then((snapshot) => {
    snapshot.docs.forEach((doc) => {
        console.log(doc.id, doc.data());
    });
});
Setting Up Firestore

  1. Create a Firebase project at firebase.google.com
  2. Navigate to "Firestore Database" and click "Create database"
  3. Start in "test mode" for development (you'll add security rules later)
  4. Add the Firebase SDK to your project (instructions will be shown)

Common Use Cases for Beginners

  • To-do list app: Store task items and their completion status
  • Blog: Store posts, comments, and user information
  • Game: Store high scores and player progress
  • E-commerce: Store product information and user carts
Important to understand:

The free tier limits how many reads and writes you can do each day (50k reads/20k writes). This is plenty for learning and small projects, but for apps with lots of users, you might eventually need a paid plan.
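To get a feel for those limits, here is a rough back-of-the-envelope sketch; the per-visit figures are assumptions for illustration, not measurements:

```javascript
// Spark-tier daily quotas (from the list above)
const freeReadsPerDay = 50_000;
const freeWritesPerDay = 20_000;

// Hypothetical app: each visit reads ~100 documents and writes ~10
const readsPerVisit = 100;
const writesPerVisit = 10;

const visitsByReads = Math.floor(freeReadsPerDay / readsPerVisit);    // 500
const visitsByWrites = Math.floor(freeWritesPerDay / writesPerVisit); // 2000

// Reads are the bottleneck here: ~500 visits/day fit in the free tier
const supportedVisits = Math.min(visitsByReads, visitsByWrites);
```

Cutting the number of documents each page load reads is usually the easiest way to stretch the quota.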

AWS DynamoDB

Amazon

NoSQL key-value and document database. Free tier includes generous resources for applications.

  • 25 GB storage
  • 25 WCU and 25 RCU
  • Up to 200M requests/month
  • Always free tier
DynamoDB Explained

AWS DynamoDB is Amazon's fully managed NoSQL database service. It's designed to provide fast and predictable performance with seamless scalability, even with very large amounts of data.

When to use DynamoDB

DynamoDB is a great choice when you need:

  • A database that can handle massive scale
  • Consistent single-digit millisecond response times
  • Simple key-value lookups or document storage
  • A fully-managed database (no administration)
  • Integration with other AWS services
WCU and RCU Explained:

WCU (Write Capacity Units) and RCU (Read Capacity Units) are how DynamoDB measures performance capacity. The free tier provides 25 units of each, which is enough for many small to medium applications with thousands of users.
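Concretely, AWS defines one WCU as one write per second for an item up to 1 KB, and one RCU as one strongly consistent read per second for an item up to 4 KB (or two eventually consistent reads). The free tier's 25/25 therefore works out to:

```javascript
const wcu = 25;
const rcu = 25;

// 1 WCU = one 1 KB write per second
const writesPerSecond = wcu; // 25 writes/s for items <= 1 KB

// 1 RCU = one strongly consistent 4 KB read per second,
// or two eventually consistent reads
const strongReadsPerSecond = rcu;        // 25 reads/s
const eventualReadsPerSecond = rcu * 2;  // 50 reads/s
```

Larger items consume proportionally more units, so keeping items small stretches the free capacity further.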

Getting started with DynamoDB

  1. Create or sign in to an AWS account
  2. Navigate to the DynamoDB console
  3. Click "Create table"
  4. Specify table name, primary key, and optional settings
  5. Choose the "On-demand" capacity mode for new projects
  6. Create the table and start using it

Data Modeling Basics

In DynamoDB, your data structure is organized into:

  • Tables: Collections of data items
  • Items: Similar to rows in traditional databases
  • Attributes: Data fields within each item
  • Primary Key: Uniquely identifies each item (required)
  • Sort Key: Optional second part of the key for organizing data
Important note:

DynamoDB works best when you design your table structure around your query patterns. Unlike traditional SQL databases, you should know what queries you'll need before designing your tables.

Basic Operations with JavaScript SDK

Adding an item:

const AWS = require('aws-sdk');
const dynamoDB = new AWS.DynamoDB.DocumentClient();

async function addUser(userId, name, email) {
  const params = {
    TableName: 'Users',
    Item: {
      userId: userId,
      name: name,
      email: email,
      createdAt: Date.now()
    }
  };
  
  await dynamoDB.put(params).promise();
  console.log('User added successfully');
}

Getting an item by key:

async function getUser(userId) {
  const params = {
    TableName: 'Users',
    Key: {
      userId: userId
    }
  };
  
  const result = await dynamoDB.get(params).promise();
  return result.Item; // Returns the user or undefined
}

Common Query Patterns

DynamoDB excels at these common operations:

  • Get item by exact primary key
  • Query items with the same partition key
  • Scan entire tables (use sparingly)
  • Use secondary indexes for flexible queries
  • Batch operations for efficiency
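Beyond put and get, the workhorse operation is Query on a partition key. As a sketch, assume a hypothetical Posts table with partition key authorId and sort key createdAt; the parameters for "recent posts by one author" would be built like this and passed to dynamoDB.query(params).promise() as in the examples above:

```javascript
// Build Query parameters for a hypothetical "Posts" table
// (partition key: authorId, sort key: createdAt in epoch ms).
function buildRecentPostsQuery(authorId, sinceTimestamp) {
  return {
    TableName: "Posts",
    KeyConditionExpression: "authorId = :a AND createdAt >= :t",
    ExpressionAttributeValues: {
      ":a": authorId,
      ":t": sinceTimestamp,
    },
    ScanIndexForward: false, // newest first
  };
}

const params = buildRecentPostsQuery("user123", 1674172800000);
```

Notice that the query only touches one partition; that locality is exactly why designing tables around query patterns matters so much in DynamoDB.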

Cloudflare D1

Beta

Cloudflare's serverless SQL database (currently in Beta).

  • 500,000 requests/month
  • 1 GB storage
  • SQL-based queries
Beta service - limits may change
Cloudflare D1 Explained

Cloudflare D1 is a SQLite-compatible, serverless SQL database that runs at Cloudflare's edge network. Unlike most NoSQL databases offered by competitors, D1 allows you to use familiar SQL queries while still maintaining serverless benefits.

When to use Cloudflare D1

Consider D1 when you need:

  • A database with standard SQL query capabilities
  • Global distribution for low-latency data access
  • Simple integration with Cloudflare Workers
  • No database server to manage
  • Familiarity with SQL rather than NoSQL query languages
Beta Status Note:

As of 2023, D1 is still in beta. While it's stable enough for projects, be aware that features, limits, and pricing might change when it reaches general availability.

Getting started with Cloudflare D1

  1. Create a Cloudflare account
  2. Install Wrangler CLI: npm install -g wrangler
  3. Authenticate with Wrangler: wrangler login
  4. Create a D1 database: wrangler d1 create my-database
  5. Create tables using SQL commands
  6. Connect to your database from Cloudflare Workers

Example Usage

Creating a table with Wrangler CLI:

wrangler d1 execute my-database --command "
  CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    email TEXT UNIQUE,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
  )
"

Accessing D1 from a Cloudflare Worker:

export interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env) {
    // Query the database
    const { results } = await env.DB.prepare(
      "SELECT * FROM users ORDER BY created_at DESC LIMIT 10"
    ).all();
    
    // Return the results as JSON
    return Response.json(results);
  }
};
D1 vs Other Options

How D1 compares to other database services:

  • SQL queries instead of proprietary query languages (vs. DynamoDB, Firestore)
  • Edge deployment for lower latency (vs. traditional SQL databases)
  • No connection management needed (vs. MySQL, PostgreSQL)
  • Simple integration with Cloudflare Workers (vs. any non-Cloudflare database)
Ideal use case:

D1 works best for applications that need SQL capabilities but don't require complex relational database features. It's perfect for content management systems, user data storage, and applications using Cloudflare Workers.

Azure Cosmos DB

Microsoft

Microsoft's globally distributed, multi-model database service.

  • 1000 RU/s throughput
  • 25 GB storage
  • Global distribution
  • Multiple APIs (SQL, MongoDB, etc.)
Azure Cosmos DB Explained

Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. Its standout feature is flexibility: you can choose from multiple data models (document, key-value, graph, etc.) and access them through various APIs (SQL, MongoDB, Cassandra, etc.).

When to use Azure Cosmos DB

Consider Cosmos DB when you need:

  • Global distribution for low-latency data access worldwide
  • Multi-model support (store different types of data in one database)
  • Strong integration with other Azure services
  • Guaranteed millisecond response times
  • Automatic scaling without server management
RU/s Explained:

RU/s (Request Units per second) is how Cosmos DB measures throughput. The free tier includes 1000 RU/s, which is enough for simple applications with thousands of users. A single point read typically costs 1 RU, while writes and more complex queries use more.
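As a rough illustration of that budget (the per-operation costs are typical ballpark figures, not guarantees):

```javascript
const budget = 1000; // free-tier RU/s

// Typical ballpark costs: ~1 RU per 1 KB point read, ~5 RU per 1 KB write
const readCost = 1;
const writeCost = 5;

// A hypothetical workload that is 90% reads and 10% writes
const avgCostPerOp = 0.9 * readCost + 0.1 * writeCost; // 1.4 RU
const opsPerSecond = Math.floor(budget / avgCostPerOp); // ~714 ops/s
```

Complex queries and large documents cost more RUs, so read-heavy workloads with small documents get the most out of the free tier.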

Getting started with Azure Cosmos DB

  1. Create an Azure account
  2. In the Azure Portal, create a new Cosmos DB account
  3. Choose your API preference (SQL, MongoDB, etc.)
  4. Select free tier pricing
  5. Create a database and container within your account
  6. Start adding data using your chosen API

Available APIs

  • Core (SQL): Microsoft's native API with SQL-like queries
  • MongoDB: Use standard MongoDB client libraries
  • Cassandra: Compatible with Apache Cassandra tools
  • Gremlin: For graph databases and queries
  • Table: Azure Table Storage API
Important consideration:

You must choose your API when creating the account. While Cosmos DB is multi-model, individual accounts are dedicated to a particular API type and cannot be changed later.

Using the SQL API with JavaScript

First, install the SDK:

npm install @azure/cosmos

Adding an item to a container:

const { CosmosClient } = require("@azure/cosmos");

// Initialize the client
const endpoint = "https://your-account.documents.azure.com";
const key = "your-account-key";
const client = new CosmosClient({ endpoint, key });

async function addItem() {
  const { database } = await client.databases.createIfNotExists({ id: "mydb" });
  const { container } = await database.containers.createIfNotExists({ id: "items" });
  
  // Add an item to the container
  const newItem = {
    id: "1",
    category: "personal",
    name: "Running shoes",
    price: 99.99,
    active: true
  };
  
  const { resource } = await container.items.create(newItem);
  console.log(`Added item: ${resource.id}`);
}

addItem();

Querying items with SQL syntax:

async function queryItems() {
  // Look up the same database and container used in addItem() above
  const { database } = await client.databases.createIfNotExists({ id: "mydb" });
  const { container } = await database.containers.createIfNotExists({ id: "items" });
  const querySpec = {
    query: "SELECT * FROM c WHERE c.category = @category",
    parameters: [
      {
        name: "@category",
        value: "personal"
      }
    ]
  };
  
  const { resources } = await container.items
    .query(querySpec)
    .fetchAll();
    
  console.log(`Found ${resources.length} items`);
  console.log(resources);
}

queryItems();

Cloud Functions Free Tier

Firebase Cloud Functions

Google

Serverless functions that run in response to events, free on the Spark plan.

  • 2 million invocations/month
  • 400,000 GB-seconds compute time
  • 5 GB outbound networking
What are Cloud Functions?

Cloud Functions are small pieces of code that run in the cloud whenever they're needed, then shut down when they're done. They're a way to add backend functionality without managing a whole server.

What's the big deal?

Before cloud functions, if you wanted code to run on a server, you had to rent an entire server that ran 24/7, even when no one was using your app. Cloud functions only run when triggered, so you only pay for what you use!

When would I use this?

Use Cloud Functions when your website or app needs to do something in the background that can't happen in the browser, like:

  • Processing form submissions or payments
  • Sending emails or notifications
  • Running scheduled tasks (like daily reports)
  • Connecting to other services that require secret API keys
  • Processing images or other data
How Cloud Functions work (with simple examples)

You write a small piece of code (a "function") that does one specific task. This function can be triggered by different events:

  • HTTP triggers: Run when someone visits a specific URL
  • Database triggers: Run when data changes in your database
  • Auth triggers: Run when users sign up or log in
  • Schedule triggers: Run at specific times (like cron jobs)

Simple Examples

An HTTP trigger function (responds to web requests):

exports.helloWorld = functions.https.onRequest((request, response) => {
  response.send("Hello from Firebase!");
});
// This creates a URL like: https://us-central1-yourproject.cloudfunctions.net/helloWorld

A database trigger function (runs when data changes):

exports.welcomeNewUser = functions.firestore
  .document('users/{userId}')
  .onCreate((snap, context) => {
    const newUser = snap.data();
    console.log(`New user: ${newUser.name}`);
    // Could send welcome email, create default data, etc.
  });

An auth trigger function (runs when users sign up):

exports.sendWelcomeEmail = functions.auth.user()
  .onCreate((user) => {
    // Send a welcome email to the new user
    return sendEmail(user.email, "Welcome to our app!");
  });
Practical Examples for Beginners

1. Contact Form Handler

  • User submits a contact form on your website
  • Cloud function receives the form data
  • Function sends an email to you with the message
  • Function sends a confirmation email to the user
  • Function saves the message to your database

2. Image Processing

  • User uploads a profile picture
  • Cloud function detects the new image
  • Function creates different sized versions (thumbnail, medium, etc.)
  • Function updates the database with new image URLs

3. Scheduled Cleanup

  • Cloud function runs every night at 2 AM
  • Function finds old temporary data in your database
  • Function deletes data that's no longer needed
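In Firebase, the nightly trigger for a job like this is functions.pubsub.schedule('0 2 * * *').onRun(...). The selection logic itself is a simple age filter, sketched here on plain objects with hypothetical field names; a deployed version would query and batch-delete Firestore documents instead:

```javascript
// Select items older than maxAgeMs (hypothetical createdAt timestamps in ms)
function findStale(items, now, maxAgeMs) {
  return items.filter((item) => now - item.createdAt > maxAgeMs);
}

const now = Date.parse("2023-02-01T02:00:00Z");
const weekMs = 7 * 24 * 60 * 60 * 1000;
const items = [
  { id: "a", createdAt: Date.parse("2023-01-10T00:00:00Z") }, // ~3 weeks old
  { id: "b", createdAt: Date.parse("2023-01-31T00:00:00Z") }, // ~1 day old
];
const stale = findStale(items, now, weekMs);
```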
Important to understand:

"Serverless" doesn't mean there's no server - it means you don't have to manage the server. You only pay for the exact time your function is running, making it very cost-effective for most beginners and small projects.

Setting Up Firebase Cloud Functions

  1. Install Node.js on your computer if you haven't already
  2. Install the Firebase CLI: npm install -g firebase-tools
  3. Login to Firebase: firebase login
  4. Initialize your project: firebase init functions
  5. Write your functions in the generated functions/index.js file
  6. Deploy your functions: firebase deploy --only functions
Getting Help:

Cloud Functions use JavaScript/Node.js, so you can leverage the huge ecosystem of npm packages. If you need to do something specific (like sending emails or processing images), there's likely already a package that makes it easy!

GitHub Actions

Can be used for simple serverless automation.

  • Unlimited minutes for public repos
  • 2,000 minutes/month for private repos
  • CI/CD pipeline integration
GitHub Actions Explained

GitHub Actions is a workflow automation platform built into GitHub repositories. While primarily designed for CI/CD (continuous integration and continuous deployment), it can also function as a simple serverless platform for scheduled tasks, event handling, and more.

When to use GitHub Actions

Consider GitHub Actions when you need:

  • Automation triggered by GitHub events (commits, PRs, issues)
  • Scheduled jobs to run at specific times
  • Workflow automation integrated with your code repository
  • Simple API endpoints via repository_dispatch events
  • Build, test, and deployment automation
Free tier benefits:

For public repositories, GitHub Actions provides unlimited compute minutes, making it a very generous offering compared to other serverless platforms that strictly limit free invocations.

GitHub Actions as a serverless platform

Though not a traditional serverless platform, GitHub Actions can be used for:

  • Scheduled tasks: Generate reports, clean up data, send notifications
  • Webhooks: Process incoming webhooks via repository_dispatch
  • Data processing: Transform data, generate files, create visualizations
  • Automation: Interact with external APIs and services

Example: Scheduled API Check

This workflow checks a health endpoint every hour and creates an issue if it's down:

name: API Monitor

on:
  schedule:
    - cron: '0 * * * *'  # Run hourly

jobs:
  check-api:
    runs-on: ubuntu-latest
    steps:
      - name: Check API health
        id: health-check
        run: |
          RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" https://api.example.com/health)
          echo "API responded with: $RESPONSE"
          if [ "$RESPONSE" -ne 200 ]; then
            echo "API is down!"
            echo "status=down" >> "$GITHUB_OUTPUT"
          else
            echo "API is up!"
            echo "status=up" >> "$GITHUB_OUTPUT"
          fi
          
      - name: Create issue if API is down
        if: steps.health-check.outputs.status == 'down'
        uses: actions/github-script@v5
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: '🔴 API is down!',
              body: `The health check failed at ${new Date().toISOString()}`
            })
Getting started with GitHub Actions

  1. Have a GitHub repository (create one if needed)
  2. Click "Actions" tab in your repository
  3. Choose a workflow template or create a custom one
  4. Add YAML configuration for your workflow
  5. Commit the workflow file to the .github/workflows directory
  6. Actions will run automatically based on your triggers
Important limitations:

GitHub Actions has timeout limits (6 hours max), memory constraints, and ephemeral storage. While powerful, it's not designed for long-running services or heavy computational workloads. For those use cases, a dedicated serverless platform like AWS Lambda or Azure Functions would be more appropriate.

AWS Lambda

Amazon

Run backend code on-demand with Amazon's Lambda service.

  • 1 million invocations/month
  • 400,000 GB-seconds compute time/month
  • Multiple language runtimes
What is AWS Lambda?

AWS Lambda is Amazon's serverless computing service that lets you run code without provisioning or managing servers. You only pay for the compute time you consume - there is no charge when your code is not running.

When would I use this?

Lambda is perfect for scenarios where you need to perform processing in response to events or handle backend tasks without running a full server. Great for:

  • API backends for web and mobile applications
  • Automated data processing (like image resizing or data transformation)
  • Scheduled tasks and background processes
  • Real-time file processing or stream processing

How it works:

You upload your code as a "Lambda function," and AWS handles everything required to run and scale your code with high availability. Your functions execute when triggered by events from other AWS services, HTTP requests via API Gateway, or on a schedule.

Why Lambda is revolutionary:

You never have to think about servers or infrastructure - you just write code that responds to events. You're billed only for the time your code runs, down to the millisecond, making it extremely cost-effective for occasional tasks.
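The 400,000 GB-seconds figure is easier to reason about with a quick calculation; the 128 MB memory size and 200 ms duration below are example values, not defaults:

```javascript
const freeGbSeconds = 400_000;    // free compute per month
const memoryGb = 128 / 1024;      // a 128 MB function = 0.125 GB
const secondsPerInvocation = 0.2; // assume ~200 ms per run

const gbSecondsPerInvocation = memoryGb * secondsPerInvocation; // 0.025
const invocationsCovered = freeGbSeconds / gbSecondsPerInvocation; // 16,000,000
// Compute is rarely the limit: the 1M invocations/month cap binds first here
```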

Setting Up Your First Lambda Function

  1. Create a free AWS account if you don't have one
  2. Navigate to the Lambda service in the AWS console
  3. Click "Create function" and start with a blueprint or from scratch
  4. Choose your runtime (Node.js, Python, Java, Go, etc.)
  5. Write or upload your function code
  6. Configure a trigger (API Gateway, S3, scheduled event, etc.)

Example Lambda Function (Node.js)

exports.handler = async (event) => {
    // Log the incoming event for debugging
    console.log(JSON.stringify(event, undefined, 2));
    
    // Process the event
    const name = event.queryStringParameters?.name || 'World';
    
    // Create response
    const response = {
        statusCode: 200,
        headers: {
            "Content-Type": "application/json"
        },
        body: JSON.stringify({
            message: `Hello, ${name}!`,
            timestamp: new Date().toISOString()
        }),
    };
    
    return response;
};
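Before deploying, a handler like the one above can be exercised locally in plain Node.js by calling it with a hand-built event object. The sketch below repeats the handler logic and mimics an API Gateway proxy event, including only the field the handler actually reads:

```javascript
// Same handler logic as above, invoked locally with a mock event
const handler = async (event) => {
    const name = event.queryStringParameters?.name || 'World';
    return {
        statusCode: 200,
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            message: `Hello, ${name}!`,
            timestamp: new Date().toISOString()
        }),
    };
};

// Simulate an API Gateway request with ?name=Ada
handler({ queryStringParameters: { name: 'Ada' } }).then((response) => {
    console.log(response.statusCode);               // 200
    console.log(JSON.parse(response.body).message); // Hello, Ada!
});
```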
Important to understand:

Lambda functions have execution time limits (15 minutes max) and memory allocation limits. They're perfect for short, focused tasks but not for long-running processes or applications that need to maintain state between requests.

Common Lambda use cases and architecture

Popular Lambda Architectures

1. Serverless API

Combine AWS API Gateway with Lambda to create a fully serverless REST API:

Client → API Gateway → Lambda Function → DynamoDB
   ↑                                        ↓
   └──────── Response returned to client ───┘
2. Event-driven processing

Process file uploads automatically:

File uploaded → S3 Bucket → Event Trigger → Lambda Function → Processed result saved to S3 or database
Real-world example:

A photo sharing app could use Lambda to automatically resize images when they're uploaded to S3, generate thumbnails, and store metadata in DynamoDB - all without any servers to manage!

Lambda vs. Traditional Servers

Scaling: Automatic and instantaneous - handles traffic spikes without configuration
Cost: Pay only for what you use, down to 1 ms increments
Management: No server provisioning, patching, or maintenance
Limitations: 15-minute max execution, startup latency for infrequent functions

Cloudflare Workers

Deploy JavaScript/TypeScript functions at Cloudflare's edge.

  • 100,000 requests/day
  • Up to 10ms CPU time per request
  • Global edge deployment
What are Cloudflare Workers and why use them?

What are Cloudflare Workers?

Cloudflare Workers let you run JavaScript/TypeScript code on Cloudflare's global network of data centers - closer to your users than traditional cloud services. Your code runs at "the edge" of the internet, making it incredibly fast.

When would I use this?

Workers are ideal when you need maximum performance and global presence. Great for:

  • API endpoints that need to be lightning-fast globally
  • Customizing website behavior without changing your origin server
  • Handling traffic spikes with ultra-low latency
  • Creating microservices that need global distribution

How it works:

You write a small piece of JavaScript/TypeScript that runs whenever someone makes a request to your Worker's URL. Your code runs in a V8 isolate (the same engine that powers Chrome) and can respond directly to requests without going back to a central server.

What makes Workers special:

Your code runs in over 200 cities worldwide instead of just a few cloud regions. This can make your application 30-60% faster globally than traditional cloud functions, with almost no cold starts!

Getting started with Cloudflare Workers

Setting Up Your First Worker

  1. Create a free Cloudflare account
  2. Go to Workers & Pages in the dashboard
  3. Install Wrangler CLI: npm install -g wrangler
  4. Login with: wrangler login
  5. Initialize a new project: wrangler init my-worker
  6. Deploy with: wrangler deploy

Example Cloudflare Worker

// Basic Hello World API
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    const name = url.searchParams.get('name') || 'World';
    
    return new Response(JSON.stringify({
      message: `Hello ${name}!`,
      location: request.cf?.city || 'Unknown City',  // Shows the city where the code is running
      timestamp: new Date().toISOString()
    }), {
      headers: {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': '*' // Allow any website to call this API
      }
    });
  }
};
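Because Workers use the same standard Request/Response APIs that Node 18+ provides, the handler above can be unit-tested locally. In this sketch the `export default` is swapped for a local `const`, and `request.cf` (a Cloudflare-only field) simply falls back to 'Unknown City':

```javascript
// The same fetch handler as above, held in a local object for testing
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    const name = url.searchParams.get('name') || 'World';

    return new Response(JSON.stringify({
      message: `Hello ${name}!`,
      location: request.cf?.city || 'Unknown City', // request.cf exists only on Cloudflare
      timestamp: new Date().toISOString()
    }), {
      headers: {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': '*'
      }
    });
  }
};

// Exercise it with Node's built-in Request class (Node 18+)
worker.fetch(new Request('https://example.com/?name=Edge'))
  .then((res) => res.json())
  .then((data) => console.log(data.message, '/', data.location)); // Hello Edge! / Unknown City
```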
Important limitations:

Workers have a CPU time limit of 10ms on the free plan, which is plenty for most API responses and transformations, but not for heavy computation. They also lack persistent disk storage (though you can use Cloudflare's KV, R2, or D1 services for storage).

Common Workers use cases and patterns

Popular Worker Patterns

1. API Middleware

Transform, enhance or validate API requests before they reach your main server:

// Apply rate limiting and add authorization to any API
export default {
  async fetch(request, env, ctx) {
    // Check if user is rate limited
    const ip = request.headers.get('CF-Connecting-IP');
    const rateLimited = await checkRateLimit(ip, env);
    if (rateLimited) {
      return new Response('Too many requests', { status: 429 });
    }
    
    // Add authentication headers
    const modified = new Request(request);
    modified.headers.set('X-Api-Key', env.API_KEY);
    
    // Forward to origin API
    return fetch('https://my-origin-api.example.com', modified);
  }
};
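`checkRateLimit` in the middleware above is left undefined. One plausible way to implement it is a fixed-window counter in Workers KV; the sketch below assumes a KV namespace bound as `RATE_KV` (an invented name) and includes an in-memory stand-in so the logic can be tested outside Cloudflare:

```javascript
// Sketch of the checkRateLimit helper: fixed-window counter in Workers KV.
// The RATE_KV binding name and the limits are assumptions, not Cloudflare defaults.
const LIMIT = 100;      // max requests per window
const WINDOW_TTL = 60;  // window length in seconds (60 s is the KV minimum TTL)

async function checkRateLimit(ip, env) {
  const key = `rate:${ip}`;
  const count = parseInt(await env.RATE_KV.get(key), 10) || 0;
  if (count >= LIMIT) return true; // over the limit: rate limited
  // KV is eventually consistent, so this counter is approximate, not exact
  await env.RATE_KV.put(key, String(count + 1), { expirationTtl: WINDOW_TTL });
  return false;
}

// In-memory stand-in for the KV binding, useful for tests outside Cloudflare
function memoryKV() {
  const store = new Map();
  return {
    async get(key) { return store.has(key) ? store.get(key) : null; },
    async put(key, value, options) { store.set(key, value); }, // TTL ignored locally
  };
}
```

For exact per-client limits you would reach for Durable Objects instead, since KV's eventual consistency lets a burst slip past the counter.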
2. Edge Cache/CDN customization

Modify content on the fly without changing your website:

// Modify HTML responses to add a banner
export default {
  async fetch(request, env, ctx) {
    // Get the original response from your website
    const response = await fetch(request);
    const contentType = response.headers.get('content-type');
    
    // Only process HTML
    if (contentType?.includes('text/html')) {
      const originalText = await response.text();
      const modified = originalText.replace(
        '</body>',
        '<div style="background:#f8d7da;padding:10px;text-align:center">Special announcement!</div></body>'
      );
      
      return new Response(modified, response);
    }
    
    return response;
  }
};
Real-world example:

A news website could use Workers to personalize content for each visitor (like showing local weather), A/B test design changes, or handle traffic spikes - all without changing its main website code.

Workers vs. Traditional Cloud Functions

Startup Time: Near-instant - practically no cold starts
Global Presence: 200+ cities worldwide vs. 20-30 regions
Execution Time: Shorter (10 ms CPU on the free tier) but sufficient for most web tasks
Use Case: Best for web-focused, request-response scenarios

Azure Functions

Microsoft

Microsoft's event-driven serverless compute platform.

  • 1 million executions/month
  • 400,000 GB-seconds/month
  • Multiple language support
  • Integrated with Azure services
What are Azure Functions and why use them?

What are Azure Functions?

Azure Functions is Microsoft's serverless computing service that allows you to run small pieces of code (called "functions") without worrying about application infrastructure. Your functions are triggered by specific events in Azure or external sources.

When would I use this?

Azure Functions are excellent for event-driven scenarios and integration tasks. Ideal for:

  • Processing data or files when they're uploaded to storage
  • Responding to database changes
  • Building REST APIs without managing servers
  • Scheduled tasks (like daily data processing)
  • Real-time stream processing

How it works:

You create a function using your preferred programming language (C#, JavaScript, Python, Java, etc.) that performs a specific task. Configure what should trigger this function (HTTP request, timer, database change, etc.), and Azure automatically runs your code when that trigger occurs.

What makes Azure Functions different:

Azure Functions has deep integration with the entire Azure ecosystem, making it particularly strong for .NET developers or teams already using other Microsoft services. The development experience is smooth with strong Visual Studio integration and robust debugging capabilities.

Getting started with Azure Functions

Creating Your First Azure Function

  1. Create a free Azure account if you don't have one
  2. Go to the Azure Portal and create a new Function App
  3. Choose your runtime stack (Node.js, .NET, Python, etc.)
  4. Create a new function with an HTTP trigger template
  5. Write or modify the function code
  6. Test directly in the portal

Example Azure Function (JavaScript)

// HTTP-triggered function
module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    const name = (req.query.name || (req.body && req.body.name) || 'World');
    
    context.res = {
        // status defaults to 200
        body: {
            message: `Hello, ${name}!`,
            timestamp: new Date().toISOString()
        },
        headers: {
            'Content-Type': 'application/json'
        }
    };
};
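The Functions runtime supplies `context` and `req`; for a quick local check you can hand-build minimal versions of both and call the handler directly. This sketch repeats the handler and mocks only the fields it reads:

```javascript
// Same handler as above, assigned locally instead of via module.exports
const handler = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    const name = (req.query.name || (req.body && req.body.name) || 'World');
    context.res = {
        body: {
            message: `Hello, ${name}!`,
            timestamp: new Date().toISOString()
        },
        headers: { 'Content-Type': 'application/json' }
    };
};

// Minimal stand-ins for the objects the Functions runtime would pass in
const context = { log: console.log, res: null };
handler(context, { query: { name: 'Azure' }, body: null })
    .then(() => console.log(context.res.body.message)); // Hello, Azure!
```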
Development options:

Unlike some platforms, Azure Functions gives you multiple development options:

  1. Code directly in the Azure Portal
  2. Develop locally using Visual Studio, VS Code, or any editor
  3. Use command-line tools

This flexibility makes it approachable for both beginners and professional developers.

Azure Functions architecture and common use cases

Trigger Types and Bindings

What makes Azure Functions powerful is the variety of triggers and bindings:

HTTP Trigger: Run code when an HTTP request is received (for APIs)
Timer Trigger: Run code on a schedule (like cron jobs)
Blob Trigger: Execute when files are added to Azure Storage
Queue Trigger: Run when messages are added to a queue
Cosmos DB Trigger: Execute when documents in a database change

Complete Example: Image Processing

This example resizes images when they're uploaded to blob storage:

// This function is triggered when an image is uploaded to the "images" container
// It automatically creates a thumbnail in the "thumbnails" container
module.exports = async function(context, myBlob) {
    context.log("Processing blob: ", context.bindingData.name);
    
    // Get file data from the binding
    const imageBuffer = Buffer.from(myBlob);
    
    // Resize the image (using a hypothetical image processing library)
    // In a real function, you'd use sharp, jimp, or another image library
    const thumbnailBuffer = await resizeImage(imageBuffer, 200, 200);
    
    // The output binding will automatically upload this to the thumbnails container
    context.bindings.thumbnail = thumbnailBuffer;
    
    context.log("Created thumbnail for: ", context.bindingData.name);
};

Azure Functions vs. Other Serverless Platforms

Language Support: Excellent support for C#/.NET, JavaScript, Python, Java, PowerShell
Development Experience: Strong IDE integration with Visual Studio and VS Code
Integration: Deep integration with other Azure services (Logic Apps, Event Grid)
Cold Start: Can be mitigated with the Premium plan (at extra cost)
Real-world example:

A photo sharing app could use Azure Functions to automatically process uploaded images - resizing them for different devices, extracting metadata, detecting inappropriate content with Azure AI, and storing information in Cosmos DB.

Authentication Free Tier

Firebase Authentication

Google

Turnkey user authentication by Google Firebase.

  • Unlimited users
  • Email/Password authentication
  • OAuth providers (Google, Facebook, GitHub, etc.)
What is Authentication and why is it important?

What is Authentication?

Authentication is how users sign up and log in to your app or website. It verifies that users are who they claim to be. Firebase Authentication handles all the complicated security stuff so you don't have to build it yourself.

Why this matters so much:

Building secure authentication from scratch is extremely difficult and dangerous. Even experienced developers often make serious mistakes that can lead to data breaches. Using Firebase Auth means you get a secure, well-tested system that protects your users.

When would I use this?

Use Firebase Authentication when your app needs users to:

  • Create accounts and log in
  • Have personalized experiences (like saving preferences)
  • Access content that's just for them
  • "Sign in with Google" or other social logins
  • Have their data protected
How Firebase Authentication works

How It Works

Instead of storing passwords yourself (which is very risky!), Firebase handles the entire login process. You just add a few lines of code to your app, and Firebase takes care of the rest.

Firebase Authentication provides:

  • Email/Password auth: Traditional email sign-up
  • Social providers: "Sign in with Google/Facebook/Twitter/GitHub"
  • Phone auth: Sign in with a text message code
  • Anonymous auth: Let users try your app before signing up
Real-world comparison:

Building your own authentication is like trying to build your own bank vault - it's complex and risky. Firebase Auth is like getting a pre-built, industry-standard vault that's already been thoroughly tested by security experts.

Simple Examples

Adding "Sign in with Google" button:

// When user clicks the sign-in button
const provider = new firebase.auth.GoogleAuthProvider();
firebase.auth().signInWithPopup(provider)
  .then((result) => {
    // User signed in successfully
    const user = result.user;
    console.log("Signed in user:", user.displayName);
  })
  .catch((error) => {
    // Handle errors
    console.error("Sign-in error:", error);
  });

Email/Password sign-up:

// When user submits a registration form
firebase.auth().createUserWithEmailAndPassword(email, password)
  .then((userCredential) => {
    // User account created successfully
    const user = userCredential.user;
    console.log("New user created:", user.email);
  })
  .catch((error) => {
    // Handle errors like "email already in use"
    console.error("Sign-up error:", error);
  });
Getting started with Firebase Authentication

Setting Up Firebase Authentication

  1. Create a Firebase project at firebase.google.com
  2. In the Firebase console, go to "Authentication" > "Sign-in method"
  3. Enable the sign-in providers you want (Email/Password, Google, etc.)
  4. Add the Firebase SDK to your project
  5. Add sign-in buttons and forms to your web app

Best Practices for Beginners

  • Start with Google Sign-in - it's the easiest to implement
  • Always check if a user is signed in when your app loads
  • Use onAuthStateChanged to detect when login status changes
  • Connect authentication with your database security rules
  • Test all authentication flows thoroughly
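The `onAuthStateChanged` listener from the checklist above is wired up as shown below. The real method lives on `firebase.auth()`; `makeFakeAuth()` here is a purely illustrative stand-in so the pattern runs without the Firebase SDK:

```javascript
// The onAuthStateChanged pattern. With Firebase you would write
// firebase.auth().onAuthStateChanged(...); the fake auth object below
// only exists so this snippet is self-contained and testable.
function makeFakeAuth() {
  const listeners = [];
  return {
    onAuthStateChanged(cb) { listeners.push(cb); cb(null); }, // starts signed out
    _signIn(user) { listeners.forEach((cb) => cb(user)); },   // test helper only
  };
}

const auth = makeFakeAuth(); // with Firebase: const auth = firebase.auth();
let currentUser = null;

auth.onAuthStateChanged((user) => {
  if (user) {
    currentUser = user;   // show the logged-in UI here
    console.log('Signed in as', user.displayName);
  } else {
    currentUser = null;   // show the login screen here
    console.log('Signed out');
  }
});
```

Registering this listener when your app loads is what lets returning users skip the login screen: Firebase fires it with the restored user as soon as the saved session is verified.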
Building a complete login system:

A complete login system with Firebase might include:

  • Sign up/login forms
  • Social login buttons
  • Password reset flow
  • Email verification
  • User profile management
  • Protected routes (pages only logged-in users can see)

All of this is much easier with Firebase than building it yourself!

AWS Cognito

Amazon

Authentication service that includes various sign-in options.

  • 50,000 free monthly active users
  • Email, phone, or OAuth providers
  • Multi-factor authentication
What is AWS Cognito and why use it?

What is AWS Cognito?

AWS Cognito is Amazon's service for adding user sign-up, sign-in, and access control to your web and mobile apps. It scales to millions of users and supports sign-in with social identity providers like Google, Facebook, Amazon, and enterprise identity providers via SAML 2.0.

When would I use this?

Cognito is ideal when you need comprehensive user authentication for your applications. Perfect for:

  • Adding secure user authentication to mobile and web apps
  • Supporting social login (Google, Facebook, Apple, etc.)
  • Managing user profiles and preferences
  • Implementing multi-factor authentication (MFA)
  • Building applications that need to be SOC, HIPAA, or PCI compliant

How it works:

Cognito has two main components:

  • User Pools: User directories that provide sign-up and sign-in options for your app users
  • Identity Pools: Grant your users access to AWS services (like S3, DynamoDB) after they've authenticated
What makes Cognito powerful:

It handles all the complex security aspects of authentication while giving you full control over the user experience. It seamlessly integrates with other AWS services, making it ideal if you're already using AWS for your application's backend.

Getting started with AWS Cognito

Setting up AWS Cognito User Pools

  1. Create a free AWS account if you don't have one
  2. Navigate to the Cognito service in the AWS console
  3. Click "Create a user pool"
  4. Configure sign-in options (email, phone, username)
  5. Set up security requirements (password policy, MFA)
  6. Configure app clients and analytics
  7. Set up the hosted UI (optional) or use the SDK in your app

Example: Adding Cognito to a Web App (JavaScript)

// First, install the AWS Amplify library
// npm install aws-amplify

// Configure Amplify
import { Amplify } from 'aws-amplify';

Amplify.configure({
    Auth: {
        region: 'us-east-1',
        userPoolId: 'us-east-1_xxxxxxxxx',
        userPoolWebClientId: 'xxxxxxxxxxxxxxxxxxxxxxxxxx',
    }
});

// Sign-up a new user
import { Auth } from 'aws-amplify';

async function signUp(username, password, email) {
    try {
        const { user } = await Auth.signUp({
            username,
            password,
            attributes: {
                email,
            }
        });
        console.log('Sign-up success!', user);
    } catch (error) {
        console.log('Error signing up:', error);
    }
}

// Sign-in a user
async function signIn(username, password) {
    try {
        const user = await Auth.signIn(username, password);
        console.log('Sign-in success!', user);
    } catch (error) {
        console.log('Error signing in:', error);
    }
}
Important considerations:

While Cognito handles the authentication complexities, you'll need to manage user authorization (what authenticated users are allowed to do) in your application. It's also important to keep security in mind when implementing authentication flows.

Common Cognito architectures and use cases

Typical Cognito Architecture

1. Web Application Authentication Flow

Here's how a typical authentication flow works with Cognito:

User → Website Sign-in Form → Cognito User Pool → JWT Token → Application

The application then presents the JWT to access protected resources.
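The JWT that Cognito issues is just three base64url-encoded segments (header.payload.signature), so its claims can be read in a few lines. Note this only decodes the token; verifying the signature requires Cognito's public keys (AWS provides the aws-jwt-verify library for that). The sample token below is hand-built for illustration, not a real Cognito token:

```javascript
// Decode (NOT verify) the claims in a JWT's payload segment
function decodeJwtPayload(token) {
  const payload = token.split('.')[1];
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}

// Hand-built sample token for illustration only
const header = Buffer.from(JSON.stringify({ alg: 'RS256', kid: 'example' })).toString('base64url');
const claims = Buffer.from(JSON.stringify({ sub: 'user-123', email: 'user@example.com' })).toString('base64url');
const sampleToken = `${header}.${claims}.fake-signature`;

console.log(decodeJwtPayload(sampleToken).sub); // user-123
```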
2. Social Identity Integration

Cognito can federate with social identity providers:

User → "Login with Google" → Google Auth → Cognito User Pool → Application
                                                  ↓
                            User Profile Created/Updated with Google Info
Real-world example:

A SaaS application might use Cognito to handle user registration and login, allow social sign-in with Google and GitHub, enforce strong passwords and MFA for security, and use Cognito's hosted UI for a professional login experience without building custom login screens.

Cognito vs. Other Auth Solutions

Cost Model: Pay per MAU (monthly active user) with a generous free tier
AWS Integration: Seamlessly integrates with AWS services like IAM, S3, API Gateway
Customization: Highly customizable but requires more setup than some specialized auth providers
MFA Options: SMS, TOTP, email, and custom auth flows

Cloudflare Access

Secure authentication for apps using Cloudflare Zero Trust.

  • Free for up to 50 users
  • Zero Trust security model
  • Multiple authentication methods
What is Cloudflare Access and why use it?

What is Cloudflare Access?

Cloudflare Access is a Zero Trust security solution that acts like a smart bouncer for your applications and internal resources. Instead of using a VPN, Access checks each request to your applications to verify identity and permissions before allowing users in.

When would I use this?

Access is ideal when you need to secure applications or internal tools without the complexity of a VPN. Perfect for:

  • Securing internal tools and dashboards
  • Protecting development or staging environments
  • Controlling access to client portals
  • Replacing complex VPN setups
  • Securing SaaS applications with an additional layer of protection

How it works:

Cloudflare Access sits in front of your applications and requires users to authenticate before they can reach your resources. It integrates with identity providers like Google, GitHub, Microsoft, and others to verify users, then applies policies to determine what they can access.

What makes Access special:

Unlike traditional security that depends on network location (like VPNs), Access follows Zero Trust principles - "never trust, always verify." This means every request is fully authenticated and authorized, regardless of where it comes from, making your applications more secure without adding friction for legitimate users.

Getting started with Cloudflare Access

Setting Up Cloudflare Access

  1. Create a free Cloudflare account if you don't have one
  2. Navigate to the Zero Trust dashboard from your account
  3. Complete the Zero Trust onboarding
  4. Set up your first identity provider (e.g., Google, GitHub, or One-Time Pin)
  5. Create an Access application to protect a specific resource
  6. Define access policies (who can reach this application)
  7. Test access to your protected resource

Example: Basic Access Policy

Here's what a simple Access policy might look like in the Cloudflare dashboard:

Application: development-dashboard.example.com
Policy Name: Development Team Access
Include rule:
  - Emails ending in @mycompany.com
  - AND Member of "Development" group in Google Workspace
Authentication required: Yes
Session duration: 24 hours
Important to understand:

Access doesn't replace application-level authentication. It adds a security layer before users reach your app. For complete security, you should still maintain proper authentication within your applications while using Access as your first line of defense.

Common Access use cases and integrations

Popular Access Use Cases

1. Secure Internal Tools

Protect company dashboards, admin panels, and internal tools without a VPN:

Internet → Cloudflare Access → Authentication Check → Internal Dashboard
                                       |
                                       ↓
                      Unauthorized users stopped by Access
2. Developer Environment Protection

Secure staging environments while allowing client previews:

Developer with GitHub access → Cloudflare Access → Permitted based on identity → Staging Environment
Client with approved email   → Cloudflare Access → Permitted based on identity → Staging Environment
Non-authorized visitors      → Cloudflare Access → Blocked
Real-world example:

A web development agency could use Access to protect client project previews, giving each client secure access to only their project's staging site through their existing Google or Microsoft account - no new login credentials needed!

Access vs. Traditional Security Solutions

Traditional VPN: Provides network-level access based on connection; Access provides application-level control based on identity
Basic Authentication: Basic auth uses simple credentials; Access integrates with enterprise identity providers for stronger security
IP Allow Lists: IP restrictions break with remote work; Access follows users' identities wherever they connect from
Implementation: Much faster to deploy (minutes vs. days/weeks) with no client software needed

Azure AD B2C

Microsoft

Microsoft's identity management service for consumer-facing applications.

  • 50,000 monthly active users
  • Social identity providers (Facebook, Google, etc.)
  • Customizable login experiences
  • Multi-factor authentication
What is Azure AD B2C and why use it?

What is Azure AD B2C?

Azure Active Directory B2C (Business-to-Consumer) is Microsoft's customer identity and access management (CIAM) solution. It allows your applications to securely authenticate and manage users from any identity provider, including social networks, enterprise directories, or with local accounts specific to your app.

When would I use this?

Azure AD B2C is ideal for applications that need to handle customer/consumer authentication with a high degree of customization and security. Great for:

  • Consumer-facing web and mobile apps
  • E-commerce sites that need custom registration flows
  • Applications requiring social login options (Google, Facebook, etc.)
  • Applications that need to comply with regulations like GDPR
  • Apps where you want to fully customize the login experience

How it works:

Azure AD B2C serves as an intermediary between your application and various identity providers. When a user attempts to log in, Azure AD B2C presents a customizable login page and handles the authentication process with the user's chosen identity provider. It then returns secure tokens to your application after the user is authenticated.

What makes Azure AD B2C different:

Unlike many authentication solutions, Azure AD B2C gives you complete control over the look and feel of the login experience. You can deeply customize the user interface to match your brand, and even implement complex user journeys like progressive profiling (collecting user information gradually over time).

Getting started with Azure AD B2C

Setting Up Azure AD B2C

  1. Create a free Azure account if you don't have one
  2. Create an Azure AD B2C tenant (a dedicated instance of Azure AD)
  3. Register your application in the B2C tenant
  4. Create user flows (sign-up/sign-in journeys)
  5. Configure identity providers (local accounts, Google, Facebook, etc.)
  6. Customize the UI to match your brand (optional)
  7. Integrate authentication in your application

Example: Integrating B2C with JavaScript

// Using MSAL.js (Microsoft Authentication Library)
// npm install @azure/msal-browser

import { PublicClientApplication, InteractionType } from '@azure/msal-browser';

// Configure MSAL
const msalConfig = {
  auth: {
    clientId: 'your-application-id',
    authority: 'https://your-tenant.b2clogin.com/your-tenant.onmicrosoft.com/B2C_1_signupsignin1',
    knownAuthorities: ['your-tenant.b2clogin.com'],
    redirectUri: 'https://your-app.com/auth',
  }
};

const msalInstance = new PublicClientApplication(msalConfig);

// Login function
async function login() {
  try {
    const loginRequest = {
      scopes: ["openid", "profile"],
    };
    
    // Redirect to the B2C sign-in page
    await msalInstance.loginRedirect(loginRequest);
  } catch (error) {
    console.log(error);
  }
}

// Process the auth response
msalInstance.handleRedirectPromise()
  .then(response => {
    if (response) {
      // User successfully logged in
      console.log('Logged in user:', response.account);
      
      // Get user details from ID token claims
      const idTokenClaims = response.idTokenClaims;
      
      // Store authentication state
      localStorage.setItem('isAuthenticated', 'true');
    }
  })
  .catch(error => {
    console.error('Login failed:', error);
  });
Important to understand:

Azure AD B2C has a more complex initial setup than some authentication solutions, but this complexity gives you more control and customization options. For simple applications, you might start with the built-in user flows, while more complex requirements might need custom policies.

Azure AD B2C features and customization

Key Features and Capabilities

1. User Flows vs. Custom Policies

Azure AD B2C offers two ways to define authentication journeys:

  • User Flows: Pre-built, configurable policies for common scenarios (simpler)
  • Custom Policies: Advanced, XML-based policies for complex scenarios (more powerful)
2. Customization Options
UI Customization: Control colors, logos, and layout of sign-in pages
Language Customization: Support multiple languages with customized messaging
Custom Attributes: Collect and store additional user information
Identity Providers: Connect to Microsoft, Google, Facebook, GitHub, Twitter, and custom OIDC/SAML providers
Real-world example:

An e-commerce website might use Azure AD B2C to offer users login options through their existing Google or Facebook accounts, with a fully branded login experience matching the store's design. The sign-up process could collect essential user information while enabling password-less authentication for returning customers.

B2C vs. Other Auth Solutions

Scalability: Designed for large-scale consumer applications (millions of users)
Customization: Extremely customizable UI and user journeys
Complexity: More complex initial setup but offers greater control
Integration: Deep integration with other Azure services and the Microsoft ecosystem

AI Services Free Tier

Firebase ML Kit

Google

Pre-trained machine learning APIs for text recognition, image labeling, and translation.

  • Unlimited on-device ML
  • 1,000 cloud translations/day
  • Vision, text and language APIs
Easy integration with Firebase apps
What is Firebase ML Kit and why use it?

What is Firebase ML Kit?

Firebase ML Kit is a set of pre-trained machine learning tools that allow you to add powerful AI features to your mobile and web apps without needing to be a machine learning expert. It provides both on-device and cloud-based APIs for common machine learning tasks.

When would I use this?

ML Kit is perfect when you want to add AI capabilities to your apps without the complexity of building models from scratch. Great for:

  • Scanning and processing text from images (like receipts or business cards)
  • Detecting and recognizing faces in photos or real-time camera
  • Identifying objects and scenes in images
  • Translating text between languages
  • Reading and processing barcodes or QR codes

How it works:

ML Kit offers two types of processing:

  • On-device APIs: Process data directly on the user's device without an internet connection. Great for privacy and real-time processing, but with slightly less accuracy.
  • Cloud APIs: Send data to Google's servers for processing with more powerful models. Better accuracy but requires internet connection.
What makes ML Kit special:

Unlike many AI services, ML Kit is specifically designed for mobile apps and works seamlessly with other Firebase services. The on-device processing means your app can work without internet connection and provides immediate results with no API costs or privacy concerns.

Getting started with Firebase ML Kit

Adding ML Kit to Your App

  1. Create a Firebase project and add your app (Android, iOS, or Web)
  2. Install the Firebase SDK and ML Kit libraries
  3. Choose the ML features you want to use
  4. Initialize Firebase in your app
  5. Start using ML Kit APIs in your code

Example: Text Recognition in Android

// First, add dependencies to your build.gradle
// implementation 'com.google.firebase:firebase-ml-vision:24.1.0'

// Process an image with the text recognizer
private fun recognizeText(imageUri: Uri) {
    // Get the image
    val image = FirebaseVisionImage.fromFilePath(context, imageUri)
    
    // Get an instance of FirebaseVisionTextRecognizer
    val recognizer = FirebaseVision.getInstance()
        .onDeviceTextRecognizer
    
    // Process the image
    recognizer.processImage(image)
        .addOnSuccessListener { firebaseVisionText ->
            // Task completed successfully
            val text = firebaseVisionText.text
            println("Recognized text: $text")
            
            // Process text blocks, lines, elements
            for (block in firebaseVisionText.textBlocks) {
                val blockText = block.text
                val blockCornerPoints = block.cornerPoints
                val blockFrame = block.boundingBox
                
                for (line in block.lines) {
                    val lineText = line.text
                    // Process each line...
                }
            }
        }
        .addOnFailureListener { e ->
            // Task failed with an exception
            println("Text recognition failed: ${e.message}")
        }
}

Choosing between on-device and cloud:

On-device processing works offline, is faster, and has no usage limits, but it may not be as accurate as cloud-based models. Cloud processing requires internet but provides better results for complex tasks. Many developers use on-device for real-time tasks and cloud for non-urgent processing that needs higher accuracy.

ML Kit capabilities and real-world applications

Key ML Kit Capabilities

Text Recognition: Recognize and extract text from images (both Latin and non-Latin scripts)
Face Detection: Detect faces, facial landmarks, and even recognize smiles
Image Labeling: Identify objects, places, activities, animals, and products in images
Barcode Scanning: Read and process multiple barcode formats, including QR codes
Language ID: Identify the language of text (supports 100+ languages)
Translation: Translate text between 58 languages (both on-device and cloud)

Advanced Features (Cloud-based)

  • Landmark Recognition: Identify famous landmarks in photos
  • Smart Reply: Suggest contextual responses to messages
  • Custom Models: Deploy your own TensorFlow Lite models

Real-world examples:

A restaurant review app could use image labeling to automatically categorize food photos and text recognition to extract menu items from photos of menus. A travel app could use landmark recognition to identify monuments in user photos, and language identification with translation to help travelers understand foreign signs.

ML Kit vs. Other AI Solutions

Ease of Use: Ready-to-use APIs with minimal ML knowledge required
Mobile Focus: Optimized for mobile devices with on-device processing
Integration: Seamless integration with other Firebase services
Pricing: On-device is free and unlimited; cloud features have generous free tiers

AWS Bedrock

Amazon

API access to foundation models from AI providers like Anthropic (Claude), Stability AI and AI21.

  • 750,000 characters of text generation/month
  • 5 GB vector search per month
  • Includes Claude and Stable Diffusion

What is AWS Bedrock and why use it?

What is AWS Bedrock?

AWS Bedrock is Amazon's fully managed service that makes top foundation models (FMs) from leading AI companies available through a unified API. It lets you build and scale generative AI applications using models from Anthropic (Claude), Stability AI (Stable Diffusion), AI21, and Amazon's own models.

When would I use this?

AWS Bedrock is ideal when you want to integrate state-of-the-art AI capabilities into your applications without managing complex infrastructure. Perfect for:

  • Building AI-powered chatbots, assistants, and conversational interfaces
  • Generating images from text descriptions
  • Creating content like articles, summaries, or product descriptions
  • Enhancing search with semantic understanding
  • Analyzing and extracting insights from large text documents

How it works:

Bedrock provides a simple API that connects to various foundation models. You send prompts or requests to these models through the API, and they return the generated text, images, or other outputs. You can use these models as-is or customize them with your own data through fine-tuning or retrieval augmented generation (RAG).

What makes Bedrock special:

Unlike using AI models directly from their creators, Bedrock offers a single interface to access multiple models, integrated security controls, scalable infrastructure, and seamless connections to other AWS services. This means you can switch between different AI models without changing your code, while maintaining enterprise-grade security and compliance.

Getting started with AWS Bedrock

Setting Up AWS Bedrock

  1. Create a free AWS account if you don't have one
  2. Navigate to AWS Bedrock in the AWS console
  3. Request access to the foundation models you want to use
  4. Create an IAM role with Bedrock permissions
  5. Use the AWS SDK or Bedrock console to start making inference requests

Example: Text Generation with Claude (Python)

# Install required packages
# pip install boto3

import boto3
import json

# Initialize Bedrock client
bedrock = boto3.client(
    service_name='bedrock-runtime',
    region_name='us-east-1'  # or your preferred region
)

# Define the prompt for Claude. The claude-v2 completion API expects the
# Human/Assistant turn format, ending with "\n\nAssistant:"
prompt = "\n\nHuman: Write a short story about a robot learning to paint.\n\nAssistant:"

# Create request payload
request_body = {
    "prompt": prompt,
    "max_tokens_to_sample": 500,
    "temperature": 0.7,
    "top_p": 0.9,
}

# Call Claude model
response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    body=json.dumps(request_body)
)

# Parse and print response
response_body = json.loads(response['body'].read())
generated_text = response_body.get('completion')
print(generated_text)

Important to understand:

While the free tier is generous, usage beyond the free limits will incur charges. Set up billing alerts and carefully monitor your usage when working with large-scale applications.
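To make the monitoring advice concrete, here is a minimal client-side sketch (plain Python, not an AWS API; AWS billing remains the authoritative source for actual usage) that counts generated characters against the 750,000-character monthly free tier listed above and flags when you approach it:

```python
# Illustrative free-tier usage tracker. The 750,000-character figure is the
# free-tier allowance quoted above; the 80% warning threshold is an assumption.

FREE_TIER_CHARS_PER_MONTH = 750_000

class UsageTracker:
    def __init__(self, limit: int = FREE_TIER_CHARS_PER_MONTH, warn_at: float = 0.8):
        self.limit = limit
        self.warn_at = warn_at
        self.used = 0

    def record(self, generated_text: str) -> str:
        """Record one model response; return 'ok', 'warning', or 'over_limit'."""
        self.used += len(generated_text)
        if self.used > self.limit:
            return "over_limit"
        if self.used >= self.warn_at * self.limit:
            return "warning"
        return "ok"

tracker = UsageTracker()
print(tracker.record("x" * 500_000))  # 500,000 used, under 80% -> ok
print(tracker.record("x" * 200_000))  # 700,000 used, past 80% -> warning
```

You would call `record()` with each completion returned by `invoke_model`; a real setup would pair this with AWS billing alerts rather than rely on client-side counting alone.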

Bedrock features and common use cases

Key Bedrock Capabilities

Text Generation: Create content, summaries, and conversational responses with Claude, Titan, and other LLMs
Image Generation: Create images from text descriptions using Stable Diffusion models
Fine-tuning: Customize models with your own data for specialized tasks and domain-specific knowledge
Guardrails: Control model outputs to ensure appropriate content and adherence to policies
Knowledge Bases: Enhance model responses with your organization's proprietary information

Advanced Integration

Bedrock works seamlessly with other AWS services:

  • Amazon SageMaker: For ML workflows and custom model training
  • Amazon Kendra: For enterprise search enhancement
  • AWS Lambda: For serverless AI processing
  • Amazon OpenSearch Service: For vector storage and retrieval
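As a sketch of the Lambda integration above: a handler that forwards a prompt from the event payload to Bedrock and returns the completion. The event shape and the `build_claude_request` helper are illustrative assumptions, not an AWS-defined interface; the `invoke_model` call mirrors the Python example earlier in this entry.

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 500) -> dict:
    """Build the request body for the anthropic.claude-v2 completion API,
    which expects the Human/Assistant prompt format."""
    return {
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.7,
    }

def lambda_handler(event, context):
    """Hypothetical Lambda entry point: reads 'prompt' from the event,
    calls Bedrock, and returns the completion."""
    import boto3  # available in the Lambda runtime; imported here so the
                  # pure helper above can be used without boto3 installed
    bedrock = boto3.client("bedrock-runtime")
    body = build_claude_request(event["prompt"])
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps(body),
    )
    completion = json.loads(response["body"].read()).get("completion")
    return {"statusCode": 200, "body": json.dumps({"completion": completion})}
```

Deployed behind API Gateway or an EventBridge rule, this gives you serverless text generation with no servers to manage.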

Real-world example:

A company could build a customer support system that uses Claude to analyze support tickets, generate personalized responses, and route issues to the right department. It could use Stable Diffusion to create visual explanations for customers, and AWS's knowledge base feature to ensure the AI has accurate information about the company's products and policies.

Bedrock vs. Direct Model Access

Variety: Access to multiple top models through a single API vs. separate integrations
Security: Enterprise-grade security with IAM, VPC endpoints, and AWS monitoring tools
Integration: Deep integration with the AWS ecosystem for data storage, processing, and deployment
Scalability: Built to handle enterprise workloads with high availability and throughput

Cloudflare AI Gateway

Proxy service that optimizes AI API requests to reduce cost and improve performance.

  • 100,000 requests per month
  • Request caching & optimization
  • Multiple model support

What is Cloudflare AI Gateway and why use it?

What is Cloudflare AI Gateway?

Cloudflare AI Gateway is a proxy service that sits between your application and AI providers (like OpenAI, Anthropic, etc.). It optimizes, secures, and enhances your AI API requests while reducing costs through advanced caching and request management.

When would I use this?

AI Gateway is perfect when you're already using AI models in your applications and want to improve performance, reduce costs, and add security. Ideal for:

  • Applications that make frequent similar AI requests (to take advantage of caching)
  • Projects that need to reduce AI API costs
  • Applications using multiple AI models or providers
  • Teams that need analytics and monitoring of AI usage
  • Projects requiring enhanced security around AI interactions

How it works:

Instead of your application directly calling AI APIs (like OpenAI), you route these requests through Cloudflare AI Gateway. The Gateway then:

  1. Receives your AI request
  2. Checks if an identical request has been made recently (for caching)
  3. If cached, returns the cached result immediately (saving time and money)
  4. If not cached, forwards the request to the actual AI provider
  5. Caches the result for future similar requests
  6. Provides analytics and monitoring on your AI usage
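Conceptually, steps 1-6 describe a read-through cache keyed on the request body. The toy in-memory sketch below (plain Python, not Cloudflare's implementation) shows the pattern; `upstream` stands in for the real AI provider call:

```python
import hashlib
import json

class CachingGateway:
    """Toy read-through cache illustrating the AI Gateway request flow."""

    def __init__(self, upstream):
        self.upstream = upstream  # callable standing in for the AI provider
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def _key(self, request: dict) -> str:
        # Identical requests (same model, same messages) hash to the same key
        return hashlib.sha256(json.dumps(request, sort_keys=True).encode()).hexdigest()

    def complete(self, request: dict) -> str:
        key = self._key(request)
        if key in self.cache:          # steps 2-3: cached -> return instantly
            self.hits += 1
            return self.cache[key]
        self.misses += 1               # step 4: forward to the provider
        response = self.upstream(request)
        self.cache[key] = response     # step 5: cache for future requests
        return response

# Fake provider so the sketch runs without any API key
gateway = CachingGateway(upstream=lambda req: f"echo: {req['messages'][-1]['content']}")
req = {"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello!"}]}
print(gateway.complete(req))           # miss: goes to the "provider"
print(gateway.complete(req))           # hit: served from cache
print(gateway.hits, gateway.misses)    # -> 1 1
```

The real Gateway adds TTLs, per-route rules, and analytics on top of this basic idea.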

What makes AI Gateway special:

Most applications call AI APIs directly, which means paying for every request even if they're nearly identical. AI Gateway can reduce costs by up to 80% through caching, while also improving response times by delivering cached results instantly instead of waiting for the AI model to generate them again.

Getting started with Cloudflare AI Gateway

Setting Up AI Gateway

  1. Create a free Cloudflare account if you don't have one
  2. Create an AI Gateway in the Cloudflare dashboard and note its endpoint URL
  3. Create an API Token for your AI providers (OpenAI, Anthropic, etc.)
  4. Configure the Gateway with your API tokens
  5. Update your application to route AI requests through the Gateway

Example: Using AI Gateway with OpenAI (JavaScript)

// Before: Direct OpenAI API call
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// Make direct API call
const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello!" }],
});

// After: Using AI Gateway
const response = await fetch('https://gateway.ai.cloudflare.com/v1/YOUR_ACCOUNT_ID/YOUR_GATEWAY/openai/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`, // your OpenAI key still authenticates the request
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello!" }]
  })
});

Important to understand:

You still need your own API keys from the AI providers (like OpenAI). AI Gateway doesn't give you free access to these models - it optimizes your existing access to make it more efficient and cost-effective.

AI Gateway features and cost benefits

Key AI Gateway Features

Intelligent Caching: Stores and reuses responses for similar prompts, dramatically reducing API costs
Multi-provider Support: Works with OpenAI, Anthropic, Cohere, and other major AI providers
Analytics: Provides usage metrics, cost tracking, and insights into your AI operations
Security: Adds enterprise-grade security to your AI API interactions
Customizable Rules: Create rules for caching, routing, and security policies

Cost Savings Example

Let's calculate the potential savings with AI Gateway for a chatbot application:

Without AI Gateway:
10,000 user messages/day × 30 days = 300,000 API calls/month
300,000 calls × $0.002/call (an illustrative average GPT-3.5 cost per request) = $600/month

With AI Gateway (assuming 60% cache hit rate):
300,000 total requests
- 180,000 cached responses (free)
= 120,000 actual API calls
120,000 calls × $0.002/call = $240/month

Total Savings: $360/month (60% reduction)
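The same arithmetic, generalized to any request volume and cache-hit rate (a plain helper for estimation, not part of any Cloudflare API):

```python
def monthly_ai_cost(requests_per_month: int, cost_per_call: float,
                    cache_hit_rate: float = 0.0) -> float:
    """Monthly provider cost after caching: only cache misses are billed."""
    billable_calls = requests_per_month * (1 - cache_hit_rate)
    return billable_calls * cost_per_call

# Reproduce the worked example: 300,000 requests/month at $0.002 per call
without_gateway = monthly_ai_cost(300_000, 0.002)                   # 600.0
with_gateway = monthly_ai_cost(300_000, 0.002, cache_hit_rate=0.6)  # 240.0
print(f"Savings: ${without_gateway - with_gateway:.0f}/month")      # Savings: $360/month
```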

Real-world example:

A company built a customer service AI assistant that frequently answers similar questions. By implementing AI Gateway, they cached common responses about product information, return policies, and troubleshooting steps. This reduced their OpenAI costs by 73% while making responses faster for their customers.

When to Use AI Gateway

  • High Volume Applications: When you're making many similar AI requests
  • Cost-Sensitive Projects: When you need to optimize your AI spending
  • Enterprise Applications: When you need additional security and monitoring
  • Multi-provider Setups: When you're using multiple AI services and want a unified interface

Azure AI Studio

Microsoft

Microsoft's platform for building, testing, and deploying AI applications, including Azure OpenAI.

  • $500 in free credits for new accounts
  • Access to GPT and other models
  • Easy deployment & integration
What is Azure AI Studio and why use it?

What is Azure AI Studio?

Azure AI Studio is Microsoft's unified platform for building, testing, and deploying AI applications. It provides access to powerful AI models (including OpenAI's GPT and DALL-E), tools for customizing these models, and a complete environment for developing AI solutions from start to finish.

When would I use this?

Azure AI Studio is ideal when you want to create advanced AI applications in a professional, enterprise-ready environment. Perfect for:

  • Building AI-powered chatbots and assistants
  • Creating applications that need to understand and generate human language
  • Developing systems that can analyze documents or extract information from text
  • Generating images, code, or other content with AI
  • Customizing AI models with your own data (through fine-tuning or RAG)

How it works:

Azure AI Studio combines several Microsoft AI technologies into a unified platform:

  1. Model Access: Provides access to Azure OpenAI models (GPT-4, GPT-3.5, DALL-E, etc.) and other AI capabilities
  2. Development Tools: Offers a graphical interface for designing, testing, and deploying AI applications
  3. Data Management: Includes tools for managing datasets used to enhance AI models
  4. Deployment Options: Provides ways to deploy AI solutions as APIs, web apps, or integrations with other services

What makes Azure AI Studio special:

Unlike directly accessing AI models through their providers, Azure AI Studio offers enterprise-grade security, compliance features, and seamless integration with Microsoft's ecosystem. It's designed with a focus on responsible AI use and provides guardrails to help ensure AI applications are built ethically and safely.

Getting started with Azure AI Studio

Setting Up Azure AI Studio

  1. Create a free Azure account (new users get $500 in credits)
  2. Navigate to Azure AI Studio in the Azure portal
  3. Complete the Azure OpenAI Service access application (required for using GPT models)
  4. Create a new project in AI Studio
  5. Select the AI models and resources you want to use
  6. Start building with the playground or programmatic APIs

Example: Using the Azure OpenAI API (Python)

# Install the OpenAI Python library (the package is "openai", not "azure-openai";
# this example uses the pre-1.0 library interface)
# pip install "openai<1"

import os
import openai

# Set up Azure OpenAI configuration
openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")

# Define your deployment name (what you named your model deployment in Azure)
deployment_name = 'gpt-4'

# Call the Azure OpenAI model
response = openai.ChatCompletion.create(
    engine=deployment_name,
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Write a short poem about technology."}
    ],
    temperature=0.7,
    max_tokens=150
)

# Print the response
print(response['choices'][0]['message']['content'])

Important to know:

You'll need to apply for access to Azure OpenAI services before you can use GPT models. This approval process typically takes a few business days. While waiting, you can still explore other AI capabilities in Azure AI Studio.

Azure AI Studio capabilities and advanced features

Key AI Studio Components

Prompt Flow: Visual tool for designing and testing complex AI workflows
Model Catalog: Access to GPT-4, GPT-3.5, DALL-E, and other advanced models
Vector Search: Build RAG applications that can use your own data to enhance AI responses
Safety & Compliance: Content filtering, usage monitoring, and enterprise security
Evaluation Tools: Test and compare different prompts, models, and configurations
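The Vector Search component above implements the RAG pattern: retrieve the most relevant documents, then pack them into the prompt as context before calling the model. The sketch below illustrates the pattern with a toy word-overlap score standing in for real embedding similarity; none of it is Azure-specific API:

```python
def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words appearing in the document.
    A real RAG system compares embedding vectors instead."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def build_rag_prompt(query: str, documents: list[str], top_k: int = 2) -> str:
    """Retrieve the top_k most relevant documents and prepend them as context."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our return policy allows refunds within 30 days of purchase.",
    "The company cafeteria opens at 8am.",
    "Refunds for damaged items are processed within 5 business days.",
]
prompt = build_rag_prompt("How do refunds work for damaged items?", docs)
print(prompt)  # the two refund documents become the model's context
```

In Azure AI Studio, the retrieval step is handled by the vector index and the final prompt is sent to a deployed model, but the overall flow is the same.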

Building with Azure AI Studio

Azure AI Studio offers several approaches to building AI applications:

  • Playground: Web interface for quickly testing prompts and model responses
  • SDK & REST API: Programmatic access for integration into applications
  • Low-Code Tools: Visual interfaces for building without extensive coding
  • Notebook Experience: Jupyter notebooks for data science workflows

Real-world example:

A healthcare company used Azure AI Studio to build a documentation assistant for doctors. They leveraged GPT-4 for natural language understanding, connected it to their medical knowledge base using vector search, and implemented strict privacy controls using Azure's compliance features. The assistant helps summarize patient encounters and generate proper medical coding, saving doctors hours of paperwork each day.

Azure AI Studio vs. Direct Model Access

Enterprise Features: Advanced security, compliance (HIPAA, SOC, etc.), and governance
Development Tools: Comprehensive environment including testing, monitoring, and deployment
Integration: Seamless connection with other Azure services (storage, functions, etc.)
Cost Model: Pay-as-you-go pricing with enterprise billing options