Render (https://render.com) is a next-generation cloud application platform that helps teams deploy, secure, and scale everything from hundred-line prototypes to complex multi-service architectures. For link targets in this document that start with a slug, assume the full path is https://render.com/docs/{slug}. For targets that start with a hash, assume an internal link to a section on the same page. Each h1 designates a new page. [dboard]: https://dashboard.render.com # Your First Render Deploy Welcome! Let's get up and running on Render. *This tutorial uses free Render resources—no payment required.* All you need is a GitHub repo with the web app you want to deploy (GitLab and Bitbucket work too). > *Want to deploy an example app using a particular language or framework?* > > Check out our [quickstarts](#quickstarts). ## 1. Sign up Signing up is fast and free. ## 2. Choose a service type To deploy to Render, you create a *service* that pulls, builds, and runs your code. 1. Launch the [Render Dashboard][dboard]. 2. In the top-right corner, open the *+ New* dropdown: [img] Here you select a *service type*. For this tutorial, choose *Web Service* or *Static Site*: | Service type | Description | Common frameworks | |--------|--------|--------| | *Web Service* | Choose this if your web app runs any server-side code. The app also needs to listen for HTTP requests on a port. Full-stack web apps, API servers, and mobile backends are all web services. | Express, Next.js, Fastify, Django, FastAPI, Flask, Rails, Phoenix | | *Static Site* | Choose this if your web app consists entirely of static content (mostly HTML/CSS/JS). Blogs, portfolios, and documentation sets are often (but not _always_) static sites. | Create React App, Vue.js, Hugo, Docusaurus, Next.js [static exports](https://nextjs.org/docs/pages/building-your-application/deploying/static-exports) | You can deploy either of these service types for free on Render.
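The table's note that a web service "needs to listen for HTTP requests on a port" is worth making concrete. The sketch below uses only Python's standard library; it assumes (per common platform convention, not stated in this tutorial) that Render supplies the port to bind via a `PORT` environment variable and that the app should listen on `0.0.0.0`:

```python
import os
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # A trivial WSGI app: respond 200 to any request path.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from Render!\n"]

def serve():
    # Assumption: the platform injects the port to bind via the PORT
    # environment variable; the 10000 default is only a local fallback.
    port = int(os.environ.get("PORT", 10000))
    # Bind 0.0.0.0 so traffic from outside the container can reach the app.
    make_server("0.0.0.0", port, app).serve_forever()
```

Calling `serve()` from a `__main__` guard would let a start command like `python app.py` keep the process listening; a WSGI server such as `gunicorn app:app` works equally well.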
> *Free web services "spin down" after 15 minutes of inactivity.* > > They spin back up when they next receive incoming traffic. Learn more about [free instance limitations](free). ## 3. Link your repo After you select a service type, the service creation form appears. 1. First, connect your GitHub/GitLab/Bitbucket account to Render: [img] After you connect, the form shows a list of all the repos you have access to: [img] 2. Select the repo that contains your web app and click *Connect*. The rest of the creation form appears. ## 4. Configure deployment Complete the service creation form to define how Render will build and run your app. *Important field details for each service type appear below:* **Web Service** #### Important web service fields | Field | Description | |--------|--------| | **Branch** | Your service only deploys commits on the branch you specify, such as `main`. Render can automatically redeploy your app whenever you push changes to this branch. | | **Root Directory** | Deploying from a monorepo? Specify the subdirectory that represents your application root. Your build and start commands will run from this directory. | | **Language** | If your app's programming language isn't listed in this dropdown, you can still deploy using the `Docker` runtime if you build your app from a `Dockerfile`. | | **Build Command** | This is the command that Render will use to build your app from source. Common examples include: **Node.js** `npm install` You can also use `yarn` or `bun`. **Python** `pip install -r requirements.txt` **Ruby** `bundle install` This usually resembles the command you run locally to install dependencies and perform any necessary compilation. | | **Start Command** | This is the command that Render will use to start your app. Common examples include: **Node.js** `npm start` You can also use `yarn` or `bun`.
**Python** `gunicorn your_application.wsgi` **Ruby** `./bin/rails server` For some frameworks, this might differ from the command you run locally to start your app. For example, a Flask app might use `flask run` locally but `gunicorn` for production. | | **Instance Type** | This determines your service's RAM and CPU, along with its cost. Choose the **Free** instance type to deploy for free: [img] | | **Environment Variables** | These will be available to your service at both build time and runtime. If you forget any, you can always add them later and redeploy. | **Static Site** #### Important static site fields | Field | Description | |--------|--------| | **Branch** | Your site only deploys commits on the branch you specify, such as `main`. Render can automatically redeploy whenever you push changes to this branch. | | **Root Directory** | Deploying from a monorepo? Specify the subdirectory that represents your application root. Your build command will run from this directory. | | **Build Command** | This is the command that Render will use to install dependencies and then build your site's static assets. Common examples include: **Next.js / Create React App / Vue.js** `npm install && npm run build` For Next.js, make sure you've set [`output: 'export'`](https://nextjs.org/docs/pages/building-your-application/deploying/static-exports#configuration) in your `next.config.js` file. **Jekyll** `bundle install && bundle exec jekyll build` | | **Publish Directory** | This is the directory containing your site's static assets, which are usually generated by your build command. Common examples include: - `build` (Create React App, Vue.js, etc.) - `out` (Next.js static export) - `_site` (Jekyll) | | **Environment Variables** | These will be available to your service at build time. Additionally, some static site generators substitute environment variable values into your generated static assets. 
For example: - Create React App performs environment variable substitution for variables prefixed with `REACT_APP_`. - Next.js does the same for variables prefixed with `NEXT_PUBLIC_`. If you forget any, you can always add them later and redeploy. | When you're done, click the **Deploy** button at the bottom of the form. Render kicks off your first deploy. ## 5. Monitor your deploy Render automatically opens a log explorer that shows your deploy's progress: [img] Follow along as the deploy proceeds through your build and start commands. - **If the deploy completes successfully,** the deploy's status updates to **Live** and you'll see log lines like these: ```bash # Web service ==> Deploying... ==> Running 'npm start' # (or your start command) ==> Your service is live 🎉 # Static site ==> Uploading build... ==> Your site is live 🎉 ``` - **If the deploy fails,** the deploy's status updates to **Failed**. Review the log feed to help identify the issue. - Also see [Troubleshooting Your Deploy](troubleshooting-deploys) for common solutions. - After you identify the issue, push a new commit to your linked branch. Render will automatically start a new deploy. ## 6. Open your app After your app deploys successfully, you're ready to view it live. Every Render web service and static site receives a unique `onrender.com` URL. Find this URL on your service's page in the Render Dashboard: [img] Click the URL to open it in your browser. Your service will serve the content for its root path. **Congratulations!** You've deployed your first app on Render 🎉 When you're ready, check out recommended [next steps](#next-steps). ## Next steps ### Connect a datastore Render provides fully managed Postgres and Key Value instances for your data needs. Both provide a Free instance type to help you get started. > **Free Render Postgres databases expire 30 days after creation.** > > You can upgrade to a paid instance at any time to keep your data. 
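After you create a datastore, your app connects to it using the connection string shown on the datastore's page in the Render Dashboard. As a sketch (the `DATABASE_URL` variable name and the URL below are illustrative placeholders, not values Render defines for you), a Postgres connection string can be decomposed with Python's standard library before handing the parts to your database driver:

```python
import os
from urllib.parse import urlparse

# Placeholder fallback for illustration only; in practice, set DATABASE_URL
# to the connection string from your Render database's dashboard page.
url = urlparse(os.environ.get(
    "DATABASE_URL",
    "postgresql://myuser:secret@example.internal:5432/mydb",
))

conn_params = {
    "host": url.hostname,            # your database's hostname
    "port": url.port,                # Postgres defaults to 5432
    "dbname": url.path.lstrip("/"),  # database name follows the slash
    "user": url.username,
    "password": url.password,
}
```

Most Postgres drivers (psycopg, asyncpg, and so on) typically also accept the URL directly, so explicit parsing like this is optional.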
Learn more about [free instance limitations](free). Learn how to create datastores and connect them to your app: - [Render Postgres databases](postgresql-creating-connecting#create-your-database) - [Render Key Value instances](key-value#create-your-key-value-instance) Paid services can also attach a [disk](disks) for persistence of local filesystem data (by default, local filesystem changes are [lost with each deploy](deploys#ephemeral-filesystem)). ### Install the Render CLI The Render CLI helps you manage your Render services right from your terminal. Trigger deploys, view logs, initiate psql sessions, and more. [video] [Get started with the Render CLI.](cli) ### Add a custom domain Each Render web service and static site receives its own `onrender.com` URL. You can also add your own custom domains to these service types. [Learn how.](custom-domains) ### Learn about operational controls Deploying your app is just the beginning. Check out a few of the ways you can manage and monitor your running services on Render: - [Scaling your instance count](scaling) - [Analyzing service metrics](service-metrics) - [Rolling back a deploy](rollbacks) - [Enabling maintenance mode](maintenance-mode) Note that some of these capabilities require running your service on a paid instance type. ### Explore other service types In addition to supporting web services and static sites, Render offers a variety of other service types to support any use case: | Service type | Description | | --------------------------------------------- | ------------------------------------------------------------------------------- | | [*Private services*](private-services) | Run servers that aren't reachable from the public internet. | | [*Background Workers*](background-workers) | Offload long-running and computationally expensive tasks from your web servers. | | [*Cron Jobs*](cronjobs) | Run periodic tasks on a schedule you define. | Note that free instances are not available for these service types.
Use this flowchart to help determine which service type is right for your use case: [diagram] # Deploy for Free You can deploy instances of some Render services *free of charge*: - Web services (web apps in Node.js, Python, Rails, etc.) - Render Postgres databases - Render Key Value instances *Free instances have important limitations, and you _should not_ use them for production applications.* However, they're perfect for testing out a new technology, working on a hobby project, or previewing Render's developer experience! You can also deploy [static sites](static-sites) on Render for free. > Web services and static sites count against your monthly included allotments of outbound bandwidth and pipeline minutes. View your usage in the [Render Dashboard](https://dashboard.render.com/billing#included-usage). ## Create a Free instance > For a more complete walkthrough, see [*Your First Render Deploy*](your-first-deploy). 1. [Sign up for Render](https://dashboard.render.com/register) if you haven't yet. 2. In the [Render Dashboard][dboard], click *New*: [img] 3. Select *Static Site*, *Web Service*, *Postgres*, or *Key Value*. Free options aren't available for other service types. 4. During the service creation flow, you choose an *instance type* to run your service on (unless it's a static site). Choose *Free*: [img] That's it! When you finish creating and deploying your service, it runs on a Free instance. For details on limitations of Free instance types, see the sections below. ## Free web services > [Learn more about web services on Render.](web-services) Free web services support many (but not all) features available to web services on paid instance types. 
Supported features include: - [Custom domains](custom-domains) - [Managed TLS certificates](tls) - [Service previews](service-previews) - [Log streams](log-streams) - [Rollbacks](rollbacks) (only to the two most recent previous deploys) *The limitations below are specific to web services on the Free instance type.* To avoid these limitations, you can create a web service on any paid instance type. ### Spinning down on idle Render *spins down* a Free web service that goes 15 minutes without receiving inbound traffic. Render spins the service back _up_ whenever it next receives a request to process. Spinning up a service takes up to a minute, which causes a noticeable delay for incoming requests until the service is back up and running. For example, a browser page load will hang temporarily. ### Monthly usage limits #### Free instance hours Render grants *750 Free instance hours* to each workspace per calendar month: - A Free web service consumes these hours as long as it's running ([spun-down services](#spinning-down-on-idle) don't consume Free instance hours). - If you consume all of your Free instance hours during a given month, Render *suspends* all of your Free web services until the start of the next month. - At the start of each month, your Free instance hours reset to 750 (remaining hours don't roll over). #### Bandwidth and build pipeline Free web services count against your monthly included allotments of [outbound bandwidth](outbound-bandwidth) and [build pipeline minutes](build-pipeline#pipeline-minutes). - *If you consume all of your outbound bandwidth during a given month,* Render bills you for a supplementary allotment. - If you haven't added a payment method, Render instead suspends all of your Free services for the remainder of the month. - *If you consume all of your build pipeline minutes during a given month,* Render bills you for a supplementary allotment (unless you've reached your [spend limit](build-pipeline#setting-a-spend-limit)).
- If you haven't added a payment method or you reach your spend limit, Render instead disables all new builds for your services for the remainder of the month. - In this case, your services remain active using their existing deploys. #### Tracking usage View your usage details from the *Monthly Included Usage* section of your [Billing page](https://dashboard.render.com/billing#included-usage) in the Render Dashboard: [img] Render notifies you via email when you’re approaching a usage limit, and then again if you exceed that limit. ### Service-initiated traffic threshold Render may suspend a Free web service that initiates an uncommonly high volume of traffic over the public internet. Examples of service-initiated traffic include: - Accessing an external database - Invoking external APIs - Transferring data to or from external object storage If your service is suspended this way, you can restore it by moving it to any paid instance type. ### Automatic `robots.txt` responses While a Free web service is [spun down](#spinning-down-on-idle), incoming requests to the path `/robots.txt` automatically receive a standard "disallow all" response: ```text User-agent: * Disallow: / ``` These requests do _not_ reach your service or trigger a spin-up. While a Free web service is active, requests to `/robots.txt` are routed to it as normal. ### Other limitations - Render might restart a Free web service at any time. - Free web services don't support the following features of paid instance types: - [Scaling](scaling) beyond a single instance - [Persistent disks](disks) - [Edge caching](web-service-caching) - Running [one-off jobs](one-off-jobs) - [Shell access](ssh) via SSH or the Render Dashboard - Free web services can't _receive_ [private network](private-network) traffic. - They can _send_ private network requests to your data stores and paid services in the same region. - Free web services can't listen on reserved ports `18012`, `18013`, or `19099`.
- Free web services can't send outbound network traffic on ports `25`, `465`, or `587`, commonly used for SMTP. ## Free Postgres > [Learn more about Render Postgres.](postgresql) *The limitations below are specific to Render Postgres databases on the Free instance type.* To avoid these limitations, you can upgrade your database to any paid instance type. ### Single-instance limit Only _one_ Free Render Postgres database can be active for any given workspace. ### 1 GB limit Free Render Postgres databases have a fixed storage capacity of 1 GB. ### 30-day limit *Free Render Postgres databases expire 30 days after creation.* An expired Free database is inaccessible unless you upgrade it to a paid instance type. After a Free database expires, you have a grace period of 14 days to upgrade it to a paid instance type. After the grace period, Render *deletes* the database (along with all of its data). Render notifies you via email when you’re approaching a Free database expiration, and then again when you're approaching the end of the grace period. ### Other limitations - Render might perform maintenance on a Free Render Postgres database at any time. Your database is temporarily unavailable during maintenance. - Render might restart a Free Render Postgres database at any time. - Free Render Postgres databases don't support any form of [backups](postgresql-backups). ## Free Key Value > [Learn more about Render Key Value.](key-value) *The limitations below are specific to Render Key Value instances on the Free instance type.* To avoid these limits, you can create a Render Key Value instance on any paid instance type. ### Single-instance limit Only _one_ Free Key Value instance can be active for any given workspace. ### Ephemeral storage Free Key Value instances are _not_ backed by a persistent disk. Whenever an instance restarts, all of its data is lost. ### Other limitations - Render might perform maintenance on a Free Render Key Value instance at any time. 
Your instance is temporarily unavailable during maintenance. - Render might restart a Free Render Key Value instance at any time (thereby deleting its data). - If you upgrade a Free Render Key Value instance to a paid instance type, all of its data is lost. ## Static sites > [Learn more about static sites on Render.](static-sites) Static sites are free to deploy on Render. As with web services, they count against your monthly included allotments of outbound bandwidth and pipeline minutes. View your usage in the [Render Dashboard](https://dashboard.render.com/billing#included-usage). # Professional Features With a *Professional* plan or higher, you can invite [workspace members](team-members) to collaborate on your Render apps and infrastructure. You also gain access to powerful operational features, such as service autoscaling and environment isolation. You can upgrade any *Hobby* workspace to a *Professional* workspace from its *Billing* page in the [Render Dashboard][dboard]. > For a full comparison of plan types, see the [pricing page](pricing). ## Feature categories ### Service ops *Professional* workspaces and higher gain access to: | Feature | Description | |--------|--------| | *[Autoscaling](scaling#autoscaling)* | Automatically scale services up and down according to their memory and CPU load. | | *[Preview environments](preview-environments)* | Spin up an ephemeral copy of your entire production environment for safe and comprehensive integration testing. | | *[Performance build pipeline](build-pipeline#pipeline-tiers)* | Run builds and other pre-deploy tasks with significantly more memory and CPU. | ### Networking *Professional* workspaces and higher gain access to: | Feature | Description | |--------|--------| | *[Network-isolated environments](projects#blocking-cross-environment-traffic)* | Block private network traffic from crossing the boundary of individual project environments. 
| | *[Private links](private-link)* | Securely connect your infrastructure to non-Render providers hosted on AWS. | *Enterprise orgs* also gain access to: | Feature | Description | |--------|--------| | *Expanded [inbound IP rules](inbound-ip-rules)* | Configure which IP addresses can connect to your web services and static sites over the public internet. | ### Observability *Professional* workspaces and higher gain access to: | Feature | Description | |--------|--------| | *[HTTP request logs](logging#http-request-logs)* | Automatically log details for every HTTP request to your web services from the public internet. | | *[Response latency metrics](service-metrics#response-latency)* | Track your web service's response times with commonly used percentiles (p50, p75, p90, and p99). | | *[Metrics streaming](metrics-streams)* | Push service metrics to your OpenTelemetry-compatible observability provider. | | *[Log stream overrides](log-streams#overriding-defaults)* | - With a *Professional* plan, you can disable log streaming for any individual service. - With an *Organization* or *Enterprise* plan, you can also forward any individual service's logs to a different destination. | | *[Webhooks](webhooks)* | - With a *Professional* plan, send webhook event notifications to one destination. - With an *Organization* or *Enterprise* plan, send different sets of notifications to up to 100 destinations. | ### Compliance *Organization* workspaces and higher gain access to: | Feature | Description | |--------|--------| | *[Audit logs](audit-logs)* | View and export a history of material actions performed by workspace members. | | *Additional [member roles](team-members#member-roles)* | - With an *Organization* plan, assign the *Contributor* role to technical contributors who don't need access to sensitive fields (such as connection strings and environment variables). - With an *Enterprise* plan, also gain access to the *Viewer* and *Billing* roles.
| | *[HIPAA-enabled workspaces](hipaa-compliance)* | Run HIPAA-compliant applications and store protected health information on access-restricted hosts. | | *[Compliance documentation](certifications-compliance)* | View Render's SOC 2 Type 2 report, ISO 27001 certificate, and internal security policy. Viewing these documents also requires signing an NDA. | ### Increased limits and retention *Professional* workspaces and higher receive: - Unlimited [projects](projects) and environments - *Hobby* workspaces can create up to one project with up to two environments. - Unlimited [custom domains](custom-domains) - *Hobby* workspaces can add up to two total custom domains. - Increased monthly included amounts of [pipeline minutes](build-pipeline#pipeline-minutes) and [bandwidth usage](outbound-bandwidth) - Increased retention of past builds for [rollbacks](rollbacks) - Increased retention of historical [service metrics](service-metrics) and [logs](logging) # Using Render with LLM-Powered Tools Render supports a variety of capabilities to help you manage your infrastructure, diagnose issues, and understand the platform with the help of LLMs. ## Render MCP server Connect to Render's official MCP server to manage your Render infrastructure directly from apps like Cursor and Claude Code: [video] The MCP server provides tools for actions such as: - Spinning up new services - Querying databases - Analyzing metrics and logs It's especially useful for helping you identify and resolve issues with service deploys. ## Jules integration [Jules by Google Labs](https://jules.google/?utm_source=render) is an autonomous coding agent that provides a managed integration with Render. Whenever you open a pull request in your service's repo, Jules can detect failures in your service's preview build and automatically push fixes to address them. ### Prerequisites - Your Render service's repo must be hosted on GitHub. - Jules must have access to your service's repo. 
- [Pull request previews](service-previews#pull-request-previews-git-backed) must be enabled for your service. - These are the preview builds that Jules uses to detect and address issues. ### Setup 1. Go to [dashboard.render.com/jules](https://dashboard.render.com/jules). This opens the API Keys page of your user settings, which includes a Jules-specific section: [img] 2. Next to the *Jules by Google Labs* section, click *+ Create API Key*. A creation dialog appears. 3. Review and accept the terms for Render's Jules integration, then click *Create API Key*. 4. Copy the created API key to your clipboard. 5. Open your [Jules integrations page](https://jules.google.com/settings/integrations?utm_source=render): [img] 6. Under the *Render* integration, paste the API key you copied and submit it. You're all set! Whenever a pull request preview fails for your repo, Jules will automatically analyze its logs to identify the root cause and push a fix to address it. You can disconnect the integration at any time by deleting the API key from the Jules integrations page. ## Documentation features The Render documentation provides the following capabilities to improve content discoverability and parsing for LLMs: ### Articles as markdown Each article under `render.com/docs/` is available in a simplified markdown format that's well suited for LLMs. Obtain an article's markdown version by doing any of the following: - Append `.md` to the end of an article's URL: ``` https://render.com/docs/llm-support.md ``` - Include an `Accept: text/markdown` header in your request to an article's URL (no `.md` extension required). - Agentic tools like Claude Code often include this header in their HTTP requests by default. - Click **Copy page** in the top-right corner of an article to copy its markdown version to your clipboard (not available on smaller screen widths).
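The two URL-based approaches above can be scripted. Here's a sketch using Python's standard library (the helper names are ours for illustration, not part of any Render API):

```python
import urllib.request

DOCS_BASE = "https://render.com/docs"

def markdown_url(slug: str) -> str:
    # Option 1: append ".md" to the article's URL.
    return f"{DOCS_BASE}/{slug}.md"

def markdown_request(slug: str) -> urllib.request.Request:
    # Option 2: request the normal URL with an Accept: text/markdown header.
    return urllib.request.Request(
        f"{DOCS_BASE}/{slug}",
        headers={"Accept": "text/markdown"},
    )
```

Passing either the `.md` URL or the prepared request to `urllib.request.urlopen` fetches the article's markdown body.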
### Reading `llms.txt` and `llms-full.txt` The Render documentation includes `llms.txt` and `llms-full.txt` files at the following URLs: ``` https://render.com/docs/llms.txt https://render.com/docs/llms-full.txt ``` | File | Description | |--------|--------| | [`llms.txt`](llms.txt) | Provides a summary of the Render documentation, including links and descriptions of each article. | | [`llms-full.txt`](llms-full.txt) | Combines most of the Render documentation into a single, simplified markdown file. Some content types are omitted for brevity. | ### Docs via MCP > **This feature is experimental.** > > Support for this documentation-specific MCP server might be discontinued at any time in the future. Render's primary [MCP server](#render-mcp-server) does not yet provide tools for querying the Render documentation. To query the Render docs from LLMs, you can connect your app or agent to the following additional MCP server: ``` https://mcp.inkeep.com/render/mcp ``` This MCP server provides "tools" for searching and asking questions about the Render docs. It uses the same LLM-powered answer engine as the *Ask AI* assistant in the documentation. # Render FAQ This page lists answers to questions that many folks have as they're getting up and running with Render. ## Languages and technologies ### Which languages does Render support? Render [natively supports](language-support) *Node.js* / *Bun*, *Python*, *Ruby*, *Go*, *Rust*, and *Elixir*. You can deploy an app in virtually _any_ language (.NET, Java, PHP, etc.) via a [Docker image](docker). ### Which datastores does Render support? Render offers managed [Postgres databases](postgresql), along with [Key Value instances](key-value) that are compatible with virtually all Redis clients. You can also run your own custom database instance (MariaDB, MongoDB, etc.) backed by a [persistent disk](disks). ## Billing ### What can I do on Render for free? See [Deploy for Free](free). ### What does Render bill for? 
Render bills each workspace monthly for the usage listed below. For details on all of these, see the [pricing page](pricing). | Billable | Description | |--------|--------| | Compute costs for paid service instances | Prorated by the second. If a service is active for ten seconds in a given month, you are billed only for those ten seconds. You are billed for each of the following: - Each [paid service instance](pricing#compute), including Render Postgres and Key Value instances - For [scaled](scaling) services, you are billed for each running instance. - Replica Render Postgres databases created for [high availability](postgresql-high-availability) or [read replicas](postgresql-read-replicas) - Each paid instance running as part of a [service preview](service-previews) or [preview environment](preview-environments) | | Build pipeline minutes | *Includes a monthly included amount.* Render's [build pipeline](build-pipeline) is responsible for building your project before it's deployed. This includes running each service's [build and pre-deploy commands](deploys#deploy-steps). Each workspace receives a monthly included amount of pipeline minutes (see the [pricing page](pricing)), which are consumed while running these commands. Render also offers a [performance pipeline tier](build-pipeline#pipeline-tiers) for teams with builds that require additional memory and/or CPU. Performance pipeline minutes carry an additional charge and do _not_ provide a monthly included amount. You can set a monthly [spend limit](build-pipeline#setting-a-spend-limit) for pipeline minutes. If you reach this limit in a given month, _Render stops running new builds for your services until the next billing period_. | | Outbound bandwidth | > *Render made changes to outbound bandwidth on August 1, 2025.* For details, see [Outbound Bandwidth](outbound-bandwidth). 
*Includes a monthly included amount.* Outbound bandwidth includes all network traffic sent by your services to destinations outside of Render. This data includes web pages, API payloads, and so on. Each workspace receives a monthly included amount of outbound bandwidth shared across all services (see the [pricing page](pricing)). If you exceed this, Render bills you for a supplementary amount. _Inbound_ bandwidth (traffic _to_ your services) is free. As part of Render's [DDoS protection](ddos-protection), Render does _not_ bill for bandwidth usage incurred from a DDoS attack. | | Team members | [*Professional* workspaces](professional-features) and higher are billed per member per month, depending on plan type. For details, see the [pricing page](pricing). These workspaces gain access to features like [autoscaling](scaling#autoscaling), [preview environments](preview-environments), and the [Performance build pipeline](build-pipeline#pipeline-tiers). | ### All of my services run on free instances. Can I still be billed? *Yes, if you've added a payment method.* If you exceed your monthly included amount of outbound bandwidth or build pipeline minutes, Render bills you for a supplementary amount. You can set a monthly [spend limit](build-pipeline#setting-a-spend-limit) for pipeline minutes. > If you haven't added a payment method and you would incur charges, Render instead disables your services for the duration of the current billing period. ## Service behavior ### My app runs fine locally. Why does it fail to deploy? Please see [Troubleshooting Your Deploy](troubleshooting-deploys). ### Why is my free service sometimes slow to respond? Free web service instances [spin down](free#spinning-down-on-idle) if they receive no incoming traffic for 15 consecutive minutes. These services take up to a minute to spin back up when they next receive a request. Paid instance types do _not_ spin down. 
Learn more about [free instance limitations](free), including for Render Postgres and Key Value. ### Why do files saved to my service's filesystem disappear? By default, Render services have an *ephemeral filesystem*, which means that any changes you make to local files are *lost* every time a service redeploys or restarts. For long-term data storage on Render, we recommend one of the following: - For storage of relational data, create a [Render Postgres database](postgresql). - For storage of key-value data, create a [Render Key Value instance](key-value). - For storage of arbitrary files, attach a [persistent disk](disks). - You can also use a persistent disk to run a custom database instance instead of Render Postgres, such as [MySQL](deploy-mysql). ### Can I deploy multiple apps to a single Render service? *It might be possible, but you shouldn't.* Run each of your applications in a separate service to ensure proper resource isolation for security and performance. Let's say you want to deploy an architecture consisting of a frontend site, a backend API, and a datastore. We recommend deploying these as follows: | App | Service type(s) | | ------------- | ----------------------------------------------------------------------------------------- | | Frontend site | [Static site](static-sites) or [web service](web-services) | | Backend API | [Web service](web-services) or [private service](private-services) | | Datastore | [Render Postgres](postgresql) or a custom database backed by a [persistent disk](disks) | To help identify which service types are right for your use case, see [this flow chart](service-types#which-service-type-is-right-for-my-app). ## Account administration ### Can I transfer existing services from one workspace to another? No, it is not currently possible to transfer existing services between workspaces. 
Instead, you can: - [Invite team members](team-members) to collaborate on services in your current workspace - Recreate your services in the workspace you want to move them to ## Render support ### Which types of issues can Render's support team help with? Please see [When to contact support](troubleshooting-deploys#when-to-contact-support). # Render Service Types Render supports five different *service types* for hosting your app: - [Web services](web-services) (most common) - [Static sites](static-sites) - [Private services](private-services) - [Background workers](background-workers) - [Cron jobs](cronjobs) You can also create *fully managed datastores* to use with your app: - [Render Postgres databases](postgresql) - [Render Key Value instances](key-value) - These instances are compatible with virtually all Redis®\* clients. Choosing a service type is the first step of creating a new service in the [Render Dashboard][dboard]: [img] ## Which service type is right for my app? [diagram] See below for a summary of each service type, along with links to full documentation. ## Summary of service types ### For running code | Service Type | Description | |--------|--------| | [*Web service*](web-services) | *The most common service type.* Dynamic web apps with a public `onrender.com` subdomain for receiving traffic over HTTP. If you're building a public web app using Express, FastAPI, Rails, or something similar, use this service type. To get started, you can create a [free instance](free#free-web-services). | | [*Static site*](static-sites) | Websites that consist entirely of statically served assets (commonly HTML, CSS, and JS). Static sites have a public `onrender.com` subdomain and are served over a global CDN. 
Use static sites to deploy frontends created with frameworks such as: - [Vue.js](deploy-vue-js) - [Hugo](deploy-hugo) - [Svelte](deploy-svelte) - [Jekyll](deploy-jekyll) | | [*Private service*](private-services) | Dynamic web apps that _don't_ have a public URL. Private services do expose an _internal_ hostname for receiving traffic from your other Render services over their shared [private network](private-network). Private services are great for deploying tools like: - [Elasticsearch](deploy-elasticsearch) - [ClickHouse](deploy-clickhouse) | | [*Background worker*](background-workers) | Internal apps that run continuously, often to process jobs that are added to a job queue. Background workers do _not_ expose a URL or internal hostname, but they can send outbound requests to other service types. Use background workers with a framework like: - [Sidekiq](deploy-sidekiq-worker) - [Celery](deploy-celery) | | [*Cron job*](cronjobs) | Internal apps that run—and then exit—on a defined schedule. A cron job might run a single bash command, a script with multiple commands, or a compiled executable. Cron jobs do _not_ expose a URL or internal hostname, but they can send outbound requests to other service types. | ### For storing data > In addition to the managed datastores below, Render supports attaching a [persistent disk](disks) to most other service types. | Service Type | Description | |--------|--------| | [*Render Postgres*](postgresql) | A powerful, open-source relational database. To get started, you can create a [free instance](free#free-postgres) that expires after 30 days. Render continually backs up all paid Render Postgres instances to provide [point-in-time recovery](postgresql-backups). Larger instances support additional reliability features like [read replicas](postgresql-read-replicas) and [high availability](postgresql-high-availability). 
| | [*Render Key Value*](key-value) | An in-memory key-value store that's ideal for use as a job queue or a shared cache. To get started, you can create a [free instance](free#free-key-value). Render Key Value is compatible with virtually all Redis clients. Paid Key Value instances continuously write to disk to persist data across restarts. | # Static Sites You can deploy static websites (React, Next.js, etc.) to Render in just a few clicks. Serve your application frontends, blogs, and documentation sets over a global CDN, minimizing load times for your users around the world. *Static sites are fast and free to deploy.* After you link your site's Git repo, Render automatically updates your site with every push to your specified branch. Each site receives a unique `onrender.com` URL, and you can add your own [custom domains](custom-domains). > *Deploying a _dynamic_ site, such as a Rails server?* Instead create a [web service](web-services). Static sites count against your workspace's monthly included amounts of [outbound bandwidth](outbound-bandwidth) and [pipeline minutes](build-pipeline). You can track your usage in the [Render Dashboard](https://dashboard.render.com/billing#included-usage). ## Get started In the [Render Dashboard](https://dashboard.render.com/), click *New > Static Site*: [img] Connect your repo, specify your build details (including which Git branch to deploy), and click *Create Static Site*. You're all set! Render kicks off your site's initial deploy. For extra help with popular static site generators, we have quickstarts for: - [Next.js](deploy-nextjs-app#deploy-as-a-static-site) - [Vue.js](deploy-vue-js) - [Hugo](deploy-hugo) - [Docusaurus](deploy-docusaurus) - [Svelte](deploy-svelte) - [Jekyll](deploy-jekyll) - [Gatsby](deploy-gatsby) - [Create React App](deploy-create-react-app) ## Features ### Global CDN Render serves your site over a blazing-fast, reliable, and secure global CDN. 
We cache your content on network edges around the world, ensuring the fastest possible load times for your users. ### Pull request previews With each pull request to your site's deployed branch, Render can automatically generate a preview instance of the site with its own URL. This helps you quickly test out updates before merging. [Learn more about PR previews.](service-previews) ### Redirects and rewrites Define [redirect and rewrite rules](redirects-rewrites) for your site's paths directly from the Render Dashboard—no code required. Additionally, Render automatically redirects HTTP traffic to HTTPS. ### Custom response headers Add [custom HTTP headers](static-site-headers) to your site's responses for security and performance. ### Immediate cache invalidation Render insulates your site against failure with [zero-downtime deploys](deploys#zero-downtime-deploys). We build your site with every push to your deployed branch, and each build is fully atomic. As soon as a build succeeds, we deploy it and _immediately_ invalidate our CDN caches so your users always see the latest working version of your site. ### DDoS protection Render provides free denial-of-service protection to all static sites and web services. [Learn more.](ddos-protection) ### Brotli compression Render serves your content with [Brotli compression](https://en.wikipedia.org/wiki/brotli), which is [better than gzip](https://blogs.akamai.com/2016/02/understanding-brotlis-potential.html) and makes your sites faster by reducing page sizes. ### HTTP/2 All Render sites and web services support [HTTP/2](https://http2.github.io/) by default, which means fewer client connections to your site and faster page loads. ### Managed TLS certificates Render uses Let's Encrypt and Google Trust Services to automatically issue and renew TLS certificates for every site and service. There is no additional setup, and TLS certificates are always included for free. 
### Custom domains > *Hobby workspaces support a maximum of two custom domains across all services.* > > Professional workspaces and higher support unlimited custom domains. Add [custom domains](custom-domains) to your static site for free. Specify the domain on your site's Settings page in the [Render Dashboard][dboard], then follow the instructions to update DNS with your provider: - [Cloudflare](configure-cloudflare-dns) - [Namecheap](configure-namecheap-dns) - [Other](configure-other-dns) ### Dependency installation By default, Render automatically attempts to detect and install your static site's dependencies. If you prefer to install dependencies manually, add a `SKIP_INSTALL_DEPS` [environment variable](configure-environment-variables) to your site and set it to `true`. You can then include your own dependency installation as part of your site's build command. # Web Services Render helps you host web apps written in your favorite [language](language-support) and framework: Node.js with Express, Python with Django or FastAPI—you name it. Render builds and deploys your code with every push to your linked Git branch. You can also deploy a [prebuilt Docker image](#deploy-from-a-container-registry). Every Render web service gets a unique `onrender.com` subdomain, and you can add your own [custom domains](custom-domains). Web services can also communicate with your _other_ Render services over your [private network](private-network). > *Your web service must [*bind to a port*](#port-binding)* on host `0.0.0.0` to receive HTTP requests from the public internet. The default expected port is `10000` (you can [configure this](#port-binding)). > > If you _don't_ want your app to be reachable via the public internet, create a [private service](private-services) instead of a web service. 
## Deploy a template You can get started on Render by deploying one of our basic app templates: - [Express](deploy-node-express-app) (Node.js) - [Django](deploy-django) (Python) - [Ruby on Rails](deploy-rails-8) - [Gin](deploy-go-gin) (Go) - [Rocket](deploy-rocket-rust) (Rust) - [Phoenix](deploy-phoenix) (Elixir) - [Laravel](deploy-php-laravel-docker) (PHP) > *Don't see your framework?* [Browse more quickstarts.](#quickstarts) ## Deploy your own code You can build and deploy your web service using the code in your [GitHub](github)/[GitLab](gitlab)/[Bitbucket](bitbucket) repo, or you can [pull a prebuilt Docker image](#deploy-from-a-container-registry) from a container registry. ### Deploy from GitHub / GitLab / Bitbucket 1. [Sign up for Render](https://dashboard.render.com/register) if you haven't yet. 2. In the [Render Dashboard][dboard], click *New > Web Service*: [img] 3. Choose *Build and deploy from a Git repository* and click *Next*. 4. Choose one of your GitHub/GitLab/Bitbucket repositories to deploy from and click *Connect*. - You'll first need to link your [GitHub](github)/[GitLab](gitlab)/[Bitbucket](bitbucket) account to Render if you haven't yet. - You can use any public repo, or any private repo that your account has access to. 5. In the service creation form, provide the following details: | Field | Description | |--------|--------| | *Name* | A name to identify your service in the Render Dashboard. Your service's `onrender.com` subdomain also incorporates this name. | | *Region* | The [geographic region](regions) where your service will run. Your services in the same region can communicate over their shared [private network](private-network). | | *Branch* | The branch of your linked Git repo to use to build your service. Render can automatically redeploy your service whenever you push changes to this branch. | | *Language* | Your app's programming language. 
The service deploys to a runtime that includes the chosen language's build tools and dependencies. Render natively supports [these languages](language-support) and also provides a Docker runtime for building and running a custom image from a `Dockerfile`. | | *Build Command* | The command for Render to run to build your service from source. Common examples include `npm install` for Node.js and `pip install -r requirements.txt` for Python. | | *Start Command* | The command for Render to run to start your built service. Common examples include `npm start` for Node.js and `gunicorn your_application.wsgi` for Python. | 6. Still in the service creation form, choose an *instance type* to run your service on: [img] If you choose the Free instance type, note its [limitations](free#free-web-services). 7. Under the *Advanced* section, you can set environment variables and secrets, add a [persistent disk](disks), set a [health check path](deploys#health-checks), and more. 8. Click *Create Web Service*. Render kicks off your service's first build and deploy. - You can view the deploy's progress from your service's *Events* page in the [Render Dashboard][dboard]. > *Did your first deploy fail?* [See common solutions.](troubleshooting-deploys) ### Deploy from a container registry 1. [Sign up for Render](https://dashboard.render.com/register) if you haven't yet. 2. In the [Render Dashboard][dboard], click *New > Web Service*: [img] 3. Choose *Deploy an existing image from a registry* and click *Next*. 4. Enter the path to your image (e.g., `docker.io/library/nginx:latest`) and click *Next*. 5. In the service creation form, provide the following details: | Field | Description | |--------|--------| | *Name* | A name to identify your service in the Render Dashboard. Render also uses this name when generating your service's `onrender.com` subdomain. | | *Region* | The [geographic region](regions) where your service will run. 
Your services in the same region can communicate over their shared [private network](private-network). | 6. Still in the service creation form, choose an *instance type* to run your service on: [img] If you choose the Free instance type, note its [limitations](free#free-web-services). 7. Under the *Advanced* section, you can set environment variables and secrets, add a [persistent disk](disks), set a [health check path](deploys#health-checks), and more. 8. Click *Create Web Service*. Render pulls your specified Docker image and kicks off its initial deploy. - You can view the deploy's progress from your service's *Events* page in the [Render Dashboard][dboard]. > *Did your first deploy fail?* [See common solutions.](troubleshooting-deploys) ## Port binding *Every Render web service must bind to a port on host `0.0.0.0` to serve HTTP requests.* Render forwards inbound requests to your web service at this port (it is not _directly_ reachable via the public internet). We recommend binding your HTTP server to the port defined by the `PORT` environment variable. Here's a basic Express example:

```js:app.js
const express = require('express')
const app = express()

// Bind to the port Render provides via the PORT environment variable
const port = process.env.PORT || 4000

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`)
})
```

_Adapted ever-so-slightly from [here](https://expressjs.com/en/starter/hello-world.html)_ **The default value of `PORT` is `10000` for all Render web services.** You can override this value by [setting the environment variable](configure-environment-variables) for your service in the [Render Dashboard][dboard]. > **If you bind your HTTP server to a different port, Render is _usually_ able to detect and use it.** > > If Render fails to detect a bound port, your web service's deploy fails and displays an error in your [logs](logging). 
The following ports are reserved by Render and cannot be used: - `18012` - `18013` - `19099` ### Binding to multiple ports Render forwards inbound traffic to only _one_ HTTP port per web service. However, your web service _can_ bind to additional ports to receive traffic over your [private network](private-network). If your service does bind to multiple ports, always bind your public HTTP server to the value of the `PORT` environment variable. ## Connect to your web service ### Connecting from the public internet Your web service is reachable via the public internet at its `onrender.com` subdomain (along with any [custom domains](custom-domains) you add). > If you don't want your service to be reachable via the public internet, create a [private service](private-services) instead of a web service. Render's load balancer terminates TLS for inbound HTTPS requests, then forwards those requests to your web service over HTTP. If an inbound request uses HTTP, Render first redirects it to HTTPS and _then_ terminates TLS for it. ### Connecting from other Render services See [Private Network Communication](private-network). 
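To sketch what a private-network call looks like in code, here's a minimal Node example of one service requesting data from a private service over the shared network. The `INTERNAL_API_URL` variable, hostname, port, and path are all hypothetical placeholders; copy your private service's actual internal address from its page in the Render Dashboard:

```javascript
// INTERNAL_API_URL and my-private-service are hypothetical names. Use your
// private service's actual internal host and port from the Render Dashboard.
const base = process.env.INTERNAL_API_URL || 'http://my-private-service:10000'
const url = new URL('/api/data', base)

// Requests to an internal hostname stay on the private network and
// never traverse the public internet.
fetch(url)
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error('Private service unreachable:', err))
```

Because both services sit on the same private network, plain HTTP is fine here; no public URL or TLS termination is involved.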
## Additional features Render web services also support the following capabilities: - [Zero-downtime deploys](deploys#zero-downtime-deploys) - Free, fully-managed [TLS certificates](tls) - [Custom domains](custom-domains) (including wildcards) - Manual or automatic [scaling](scaling) - [Persistent disks](disks) - [Edge caching](web-service-caching) for static assets - [WebSocket connections](websocket) - [Service previews](service-previews) - [Instant rollbacks](rollbacks) - [Maintenance mode](maintenance-mode) - HTTP/2 - [DDoS protection](ddos-protection) - Brotli compression - Support for [Blueprints](infrastructure-as-code), Render's approach to Infrastructure-as-Code # Private Services Render private services are just like [web services](web-services), with one exception: *private services aren't reachable via the public internet.* They _do not_ receive an `onrender.com` subdomain: [diagram] However, private services _are_ reachable by your other Render services on the same [private network](private-network)! This means they're perfect for services that only your own infrastructure needs to talk to. Private services can listen on _almost_ any port ([see details](private-network#port-restrictions)) and communicate using any protocol. > *Private services must bind to at least one port.* > > If your service won't receive incoming traffic, instead create a [background worker](background-workers). See details [below](#private-service-or-background-worker). ## Examples Here are some deployment guides for tools that make great private services: - [Deploy an Elasticsearch server](deploy-elasticsearch) - [Deploy ClickHouse](deploy-clickhouse) ## Private service or background worker? Like private services, your [background workers](background-workers) are unreachable via the public internet. 
_Unlike_ private services, *background workers aren't even reachable via their [private network](private-network):* [diagram] - If your internal service will bind to _at least one port_ and receive private network traffic, create a private service. - Otherwise, create a background worker. Background workers can _send_ private network requests to other services but can't _receive_ them. They usually perform long-running or resource-intensive tasks, which they fetch from a job queue that's often backed by a [Render Key Value instance](key-value). ## Connect to your private service See [Private Network Communication](private-network#how-to-connect). # Background Workers > *Looking to run high-volume distributed background tasks?* > > Currently in early access, [Render Workflows](workflows) provide an all-in-one worker model with managed queuing, automatic retries, and rapid spin-up. *Background workers* are services that run continuously (like a [web service](web-services) or a [private service](private-services)), but they don't receive any incoming network traffic. Instead, these services usually poll a task queue (such as one backed by a [Render Key Value](key-value) instance) and process new tasks as they come in: [img] Background workers help to keep your apps responsive by offloading long-running, asynchronous tasks from your services in the critical request path. Common worker tasks include: - Processing media files - Generating reports - Interacting with third-party APIs, such as Stripe, Twilio, or AI models ## Popular worker frameworks You can use the frameworks below to simplify polling a task queue backed by a Redis®-like store (such as [Render Key Value](key-value)). 
| Language | Framework | |--------|--------| | *Python* | Celery ([see quickstart](deploy-celery)) | | *Ruby* | Sidekiq ([see quickstart](deploy-rails-sidekiq)) | | *Node.js* | [BullMQ](https://bullmq.io/) | | *Go* | [Asynq](https://github.com/hibiken/asynq) | | *Elixir* | [Oban](https://hexdocs.pm/oban/Oban.html) *Note:* This framework integrates with Render Postgres instead of Key Value. | | *Rust* | [apalis](https://github.com/geofmureithi/apalis) | # Cron Jobs You can create [cron jobs](https://en.wikipedia.org/wiki/Cron) on Render that run periodically on a schedule you define. You create cron jobs in the [Render Dashboard][dboard], just like you create any other service type: [img] Your cron job can use any of your [GitHub](github)/[GitLab](gitlab)/[Bitbucket](bitbucket) repos, or it can pull a [prebuilt Docker image](deploying-an-image) from an external registry. - *If you connect a Git repo,* Render builds a new version of your code whenever you push changes to your connected branch. The new build does not affect in-progress runs (only future runs). - *If you pull a Docker image,* Render pulls that image before _each run_ of your cron job. Render does not retain pulled images between runs. > Cron jobs can't provision or access a [persistent disk](disks#disk-limitations-and-considerations). ## Setup The cron job setup flow is similar to that of any other Render service. However, the following fields are specific to cron jobs: [img] | Field | Description | |--------|--------| | *Schedule* | The schedule to use for the cron job, defined as a [cron expression](https://en.wikipedia.org/wiki/Cron#CRON_expression). Here are some examples: - *Every ten minutes:* `*/10 * * * *` - *Once every day at noon UTC:* `0 12 * * *` - *Once every hour, Monday through Friday (UTC):* `0 * * * MON-FRI` Note that all day and time ranges use UTC. | | *Command* | The command to execute with each run. 
This can be: - Any valid Linux command, such as `echo "Hello!"` - An executable [bash script](https://tldp.org/LDP/Bash-Beginners-Guide/html/sect_02_01.html) that contains the command(s) to run *Make sure your command exits when the cron job finishes!* Cron jobs are billed according to how long they run. | ### Environment variables Like any other Render service, cron jobs can set [environment variables](configure-environment-variables) for values like database URLs and API keys. You can also share environment variables across multiple services with an [environment group](configure-environment-variables#environment-groups). ## Manually triggering a run To run your cron job at an unscheduled time (such as for debugging purposes), go to its page in the [Render Dashboard][dboard] and click *Trigger Run*. > If you manually trigger a cron job run while _another_ run is active, Render first _cancels_ the active run. For details, see [Single-run guarantee](#single-run-guarantee). ## Single-run guarantee *Render guarantees that at most one run of a given cron job is active at a given time.* This protects against issues that can arise with parallel execution. ###### What if I manually trigger a run while another run is active? Render immediately cancels the active run, then starts the manually triggered run. ###### What if a run is currently active at the time of the next scheduled run? Render _delays_ the next scheduled run until the active run finishes. ###### What if my run never finishes or takes a very long time? Render stops an active run after 12 hours. To perform tasks that run longer than this (or continuously), instead create a [background worker](background-workers). ## Instance types and billing Cron jobs can use whichever [instance type](pricing#cron-jobs) best suits their CPU and memory requirements. Billing is prorated by the second, based on active running time during a given month. There is a minimum monthly charge of $1 per cron job service. 
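To make the exit requirement above concrete, here's a minimal sketch of a script that a cron job's command might run (the filename `cleanup.js` and the task itself are hypothetical):

```javascript
// cleanup.js — a hypothetical script for a cron job command like `node cleanup.js`.
// The run ends (and billing stops) when this process exits.
async function cleanup() {
  // ...perform the scheduled work here (prune stale records, email a report, etc.)...
  return 'done'
}

cleanup()
  .then((result) => console.log(`Cron run finished: ${result}`))
  .catch((err) => {
    console.error(err)
    process.exitCode = 1 // a nonzero exit code marks the run as failed
  })

// Avoid leaving open handles (servers, intervals, unclosed DB connections):
// they keep the process alive, so the run never finishes on its own.
```

Because the script neither starts a server nor leaves timers running, the process exits as soon as the work completes, which is exactly what a cron job command should do.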
# Multi-Service Architectures on Render > *This guide teaches you how to:* > > - Combine different Render service types into a common web app architecture > - Set up connections between services using environment variables > - Communicate between services over a private network Modern cloud applications usually consist of multiple connected services: [diagram] A *multi-service architecture* like this one enables you to deploy, scale, and even swap out individual parts of your app—all with minimal impact on the rest of your system. The Render platform is designed from the ground up to support multi-service architectures. You can assemble different [service types](service-types) into any combination you need, using any set of languages and frameworks. Let's look at an example. ## Example scenario A common multi-service web app might consist of: - A React or Next.js website for the app's frontend - An Express or Django API server to handle requests from clients - A relational database for long-term storage of application data We can represent each of these components as a separate service on Render: | Component | Service Type | Common Frameworks | | ------------ | ----------------------------------------------------------------------------------------------------------- | ------------------------ | | *Frontend* | [Static site](static-sites) (or a [web service](web-services) if the frontend includes server-side logic) | React, Next.js, Vue.js | | *Backend* | [Web service](web-services) | Django, Express, FastAPI | | *Database* | [Render Postgres database](postgresql) | | Let's walk through deploying an app with these components. You can apply these steps to your own app, regardless of which frameworks you use. ## Prerequisites Before we start deploying, confirm all of the following: 1. You've created your [Render account](https://dashboard.render.com/register). 2. 
Each project you want to deploy is one of the following: - A repository hosted on GitHub, GitLab, or Bitbucket - A Docker image in a [supported registry](deploying-an-image), such as Docker Hub 3. Your full application works as expected on your local machine. 4. You've consulted your chosen framework's documentation for specific deployment guidance. - For example, here's the [deployment guide for Next.js](https://nextjs.org/docs/pages/building-your-application/deploying#self-hosting). ## Steps to deploy In the steps below, we'll first deploy each component of our architecture. Then, we'll connect them by setting environment variables. ### 1. Create a database Render provides fully managed PostgreSQL databases with [point-in-time recovery](postgresql-backups), along with reliability features like [read replicas](postgresql-read-replicas) and [high availability](postgresql-high-availability) for larger instances. We can create a Render Postgres database with a few clicks in the [Render Dashboard][dboard]: [img] Follow [these steps](postgresql-creating-connecting#create-your-database), then return here. ### 2. Deploy the backend Our application's backend will handle incoming HTTP requests from browsers and other clients. To support this, we'll create a *web service*. Web services receive a public `onrender.com` URL, and you can add your own [custom domains](custom-domains). You can use virtually any web framework for your web service (Django, Express, FastAPI, and so on). > To deploy a backend that only receives traffic from your own Render infrastructure, create a [private service](private-services) instead. 1. Make sure that on startup, your backend code binds an HTTP server to a port on host `0.0.0.0`. - We recommend binding to the value of the `PORT` environment variable (default `10000`). 
- If you're building from a Dockerfile, indicate your HTTP port in the file like so:

```dockerfile
EXPOSE 10000
```

Learn more about the [`EXPOSE` instruction](https://docs.docker.com/reference/dockerfile/#expose). 2. Create a new web service in the same region as your database and deploy your backend code to it. - Follow [these steps](web-services#deploy-your-own-code), then return here. ### 3. Deploy the frontend Our application's frontend will serve the content that users view and interact with in their browser. Depending on our frontend's framework, we'll deploy our code as either a **static site** or a second **web service**: | Service type | When to use | Example frameworks | |--------|--------|--------| | **Static site** | For apps with entirely static content (HTML/CSS/JS) | React, Vue.js, Next.js ([static exports](https://nextjs.org/docs/pages/building-your-application/deploying/static-exports) only) | | **Web service** | For apps with server-side logic | Next.js, Nuxt.js | Render static sites are served by a globally distributed CDN, so we recommend a static site whenever your framework supports static output. To deploy your frontend, follow the instructions for your chosen service type, then return here: - [Static site](static-sites#get-started) - [Web service](web-services#deploy-your-own-code) ### 4. Connect your services After creating and deploying our services, we need to configure them to communicate with each other. To do this, we can set [environment variables](configure-environment-variables) on a service to specify the address of each _other_ service it connects to: [diagram] Let's set up connections for our frontend and backend services. #### Update the frontend service 1. Look up the [public URL](web-services#connecting-from-the-public-internet) of your backend service. 2. 
[Add an environment variable](configure-environment-variables#setting-environment-variables) to your frontend service: - Give the environment variable a helpful name (such as `BACKEND_URL`) and set its value to your backend's public URL. - For static site frameworks, you might need to use a specific name for the environment variable (such as `REACT_APP_BACKEND_URL` for React). 3. Update your frontend code to use the new environment variable to connect to your backend. For example, in JavaScript:

```javascript
// Use BACKEND_URL if set, otherwise default to localhost
const BACKEND_URL = process.env.BACKEND_URL || 'http://localhost:4000'

// Basic example of fetching data from your backend
fetch(`${BACKEND_URL}/api/data`)
  .then((response) => response.json())
  .then((data) => console.log(data))
```

4. Push your updated code to your linked branch to deploy your changes. #### Update the backend service 1. Look up the public URL of your frontend service. 2. Look up the [internal address](private-network#how-to-connect) of your Render Postgres database. - Backend services on Render can communicate with each other using their "internal" (or "private") addresses. When you use an internal address, traffic between the services stays on their private network—it doesn’t traverse the open internet. 3. [Add environment variables](configure-environment-variables#setting-environment-variables) to your backend service: - Define a `FRONTEND_URL` variable and set its value to the frontend's public URL. - Define a `DATABASE_URL` variable and set its value to the database's internal address. 4. Update your backend code to use the `FRONTEND_URL` and `DATABASE_URL` environment variables to connect. See examples below. 
- Using `FRONTEND_URL` to set CORS headers in Express middleware: ```javascript // Use FRONTEND_URL if set, otherwise default to localhost const FRONTEND_URL = process.env.FRONTEND_URL || 'http://localhost:3000' // Set CORS headers to allow requests from the frontend app.use((req, res, next) => { res.setHeader('Access-Control-Allow-Origin', FRONTEND_URL) next() }) ``` - Using `DATABASE_URL` to connect to a database with the `pg` Node.js library: ```javascript const { Pool } = require('pg') const pool = new Pool({ connectionString: process.env.DATABASE_URL, }) ``` 5. Push your updated code to your linked branch to deploy your changes. After your frontend and backend deploys complete, your app should be up and running! Visit your frontend URL to confirm. If you encounter any issues, see [Troubleshooting Deploys](troubleshooting-deploys). ## Consider infrastructure as code (IaC) As your architecture grows in scale, it becomes more and more helpful to manage your services in a unified way. [Render Blueprints](infrastructure-as-code) enable you to configure and update the entire architecture of your app with a single YAML file. **Show example Blueprint** ```yaml # This is a basic example Blueprint for a Django web service and # the Render Postgres database it connects to. 
services: - type: web # A Python web service named django-app running on a free instance plan: free name: django-app runtime: python repo: https://github.com/render-examples/django.git buildCommand: './build.sh' startCommand: 'python -m gunicorn mysite.asgi:application -k uvicorn.workers.UvicornWorker' envVars: - key: DATABASE_URL # Sets DATABASE_URL to the connection string of the django-app-db database fromDatabase: name: django-app-db property: connectionString databases: - name: django-app-db # A Render Postgres database named django-app-db running on a free instance plan: free ``` You can even [generate a Blueprint](infrastructure-as-code#generating-a-blueprint-from-existing-services) for your existing services, which makes it much faster to get started. # Deploying on Render Render can [automatically deploy](#automatic-deploys) your application each time you merge a change to your codebase: [img] You can also trigger [manual deploys](#manual-deploys), both programmatically and in the Render Dashboard. All service types redeploy with [zero downtime](#zero-downtime-deploys) (unless they attach a [persistent disk](disks)). ## Automatic deploys As part of creating a service on Render, you link a branch of your [GitHub](github)/[GitLab](gitlab)/[Bitbucket](bitbucket) repo (such as `main` or `production`). Whenever you push or merge a change to that branch, by default Render automatically rebuilds and redeploys your service. Auto-deploys appear in your service's *Events* timeline in the Render Dashboard: [img] If needed, you can [skip an auto-deploy](#skipping-an-auto-deploy) for a particular commit, or even [disable auto-deploys entirely](#disabling-auto-deploys). > *Services that pull and run a [*prebuilt Docker image*](deploying-an-image) do not support auto-deploys.* > > These services do not link a Git branch and must be redeployed [manually](deploys#manual-deploys). 
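If you manage a service with a [Blueprint](infrastructure-as-code), you can also control this behavior declaratively. Here's a minimal sketch, assuming the `autoDeploy` field described in the [Blueprint spec](blueprint-spec) (service name and runtime are hypothetical):

```yaml
services:
  - type: web
    name: my-app # hypothetical service name
    runtime: node
    # Set to false if you only want to trigger deploys manually
    autoDeploy: false
```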
### Configuring auto-deploys Configure a service's auto-deploy behavior from its *Settings* page in the [Render Dashboard][dboard]: [img] Under *Auto-Deploy*, select one of the following options: | Option | Description | |--------|--------| | *On Commit* | Render triggers a deploy as soon as you push or merge a change to your linked branch. This is the default behavior for a new service. | | *After CI Checks Pass* | With each change to your linked branch, Render triggers a deploy _only after_ all of your repo's CI checks pass. For details, see [Integrating with CI](#integrating-with-ci). | | *Off* | Disables auto-deploys for the service. Choose this option if you only want to trigger deploys [manually](#manual-deploys). | #### Integrating with CI If you set your service's [auto-deploy behavior](#configuring-auto-deploys) to *After CI Checks Pass*, Render waits for a new commit's CI checks to complete before triggering a deploy. If _all_ checks pass, Render proceeds with the deploy. For GitHub checks, Render considers a check "passed" if its [conclusion](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/about-status-checks#check-statuses-and-conclusions) is any of `success`, `neutral`, or `skipped`. > *Render does _not_ trigger a deploy if:* > > - Zero checks are detected for the new commit > - At least one CI check fails for the new commit > > If your repo doesn't run CI checks but you still want auto-deploys, set your service's auto-deploy behavior to *On Commit* instead.
Select the tab for your Git provider to learn which CI checks are supported: **GitHub** Render detects the results of CI checks originating from the following: - GitHub Actions - Tools that integrate with the [GitHub checks API](https://docs.github.com/en/rest/guides/using-the-rest-api-to-interact-with-checks), such as [CircleCI](https://circleci.com/docs/enable-checks) Supported checks appear on commits and pull requests in the GitHub UI: [img] **GitLab** Render detects the results of jobs executed as part of [GitLab CI/CD pipelines](https://docs.gitlab.com/ci/pipelines/). **Bitbucket** Render detects the results of steps executed as part of [Bitbucket Pipelines](https://support.atlassian.com/bitbucket-cloud/docs/get-started-with-bitbucket-pipelines/). ### Skipping an auto-deploy Certain changes to your codebase might not require a new deploy, such as edits to a `README` file. In these cases, you can include a *skip phrase* in your Git commit message to prevent the change from triggering an auto-deploy: ```shell git commit -m "[skip render] Update README" ``` The skip phrase is one of `[skip render]` or `[render skip]`. You can also use one of the following in place of `render`: - `deploy` - `cd` When an auto-deploy is skipped, a corresponding entry appears on your service's **Events** page: [img] > **For additional control over auto-deploys, you can configure [**build filters**](monorepo-support#setting-build-filters).** > > With build filters, Render triggers an auto-deploy only if there are changes to particular files in your repo (no skip phrase required). [See details.](monorepo-support#setting-build-filters) ## Manual deploys You can manually trigger a Render service deploy in a variety of ways. 
Select a tab for details: **Dashboard** From your service's page in the [Render Dashboard][dboard], open the **Manual Deploy** dropdown: [img] Select a deploy option: | Option | Description | |--------|--------| | **Deploy latest commit** | Deploys the most recent commit on your service's linked branch. | | **Deploy a specific commit** | Deploys a specific commit from your linked branch's commit history. Specify a commit by its SHA, or by selecting it from a list of recent commits. **This disables automatic deploys for the service.** This is because an automatic deploy might reintroduce commits you wanted to exclude from this deploy. Learn more about [deploying a specific commit](deploying-a-commit). | | **Clear build cache & deploy** | Similar to **Deploy latest commit**, but first clears the service's build cache. This way, the new deploy doesn't reuse any artifacts generated during a previous build. Use this option to incorporate changes to your service's build command, or to refresh stale static assets. | | **Restart service** | Deploys the same commit that's _currently_ deployed for the service, with the same values for user-defined environment variables. For details, see [Restarting a service](#restarting-a-service). | **CLI** Run the following [Render CLI](cli) command: ```shell render deploys create ``` This opens an interactive menu that lists the services in your workspace. Select a service to deploy. **Deploy hook** Each Render service has a unique **Deploy Hook URL** available on its Settings page: [img] You can trigger a manual deploy by sending an HTTP GET or POST request to this URL. For details, see [Deploy Hooks](deploy-hooks). **API** Send a `POST` request to the Render API's [Trigger Deploy endpoint.](https://api-docs.render.com/reference/create-deploy) This endpoint accepts optional body parameters for clearing the service's build cache and/or deploying a specific commit. 
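For example, a minimal Node.js sketch that constructs such a request might look like this (the service ID and API key are placeholders, and the field names and values shown are assumptions to verify against the endpoint reference linked above):

```javascript
// Sketch: construct a Trigger Deploy request for the Render API.
// The service ID and API key below are placeholders; check the
// Trigger Deploy endpoint reference for the authoritative schema.
function buildTriggerDeployRequest(serviceId, apiKey, { clearCache = false } = {}) {
  return {
    url: `https://api.render.com/v1/services/${serviceId}/deploys`,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      // 'clear' rebuilds without reusing cached build artifacts
      body: JSON.stringify({ clearCache: clearCache ? 'clear' : 'do_not_clear' }),
    },
  }
}

// Example usage (replace placeholders with real values, then send with fetch):
const { url, options } = buildTriggerDeployRequest('srv-abc123', 'YOUR_API_KEY', { clearCache: true })
// fetch(url, options).then((res) => res.json()).then(console.log)
```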
For services that pull a Docker image, you can specify the URL of the image to pull. ## Deploy steps With each deploy, Render proceeds through the following commands for your service:
[diagram] \*Consumes [pipeline minutes](build-pipeline#pipeline-minutes) while running. [View your usage.](https://dashboard.render.com/billing#included-usage) You specify these commands as part of creating your service in the [Render Dashboard][dboard]. You can modify these commands for an existing service from its **Settings** page: [img] Each command is described below. **If any command fails or times out, the entire deploy fails.** Any remaining commands do not run. Your service continues running its most recent successful deploy (if any), with [zero downtime](#zero-downtime-deploys). Command timeouts are as follows: | Command | Timeout | | ------------------------------------- | ----------- | | [**Build**](#build-command) | 120 minutes | | [**Pre-deploy**](#pre-deploy-command) | 30 minutes | | [**Start**](#start-command) | 15 minutes | ### Build command Performs all compilation and dependency installation that's necessary for your service to run. It usually resembles the command you use to build your project locally. > **This command consumes [**pipeline minutes**](build-pipeline#pipeline-minutes) while running.** > > You receive an included allotment of pipeline minutes each month and can purchase more as needed. [View your usage.](https://dashboard.render.com/billing#included-usage) #### Example build commands for each runtime | Runtime | Example Build Command(s) | |--------|--------| | Node.js | `yarn` `npm install` | | Python | `pip install -r requirements.txt` | | Ruby | `bundle install` | | Go | `go build -tags netgo -ldflags '-s -w' -o app` | | Rust | `cargo build --release` | | Elixir | `mix phx.digest` | | Docker | **You can't set a build command for services that use Docker.** Instead, Render either [builds a custom image](docker#building-from-a-dockerfile) based on your Dockerfile or [pulls a specified image](deploying-an-image) from your container registry. 
| ### Pre-deploy command If defined, the pre-deploy command runs _after_ your service's build finishes, but _before_ that build is deployed. Recommended for tasks that should always precede a deploy but are _not_ tied to building your code, such as: - Database migrations - Uploading assets to a CDN > **The pre-deploy command executes on a separate instance from your running service.** > > Changes you make to the filesystem are _not_ reflected in the deployed service. You do not have access to a service's attached [persistent disk](disks) (if it has one). The pre-deploy command is available for paid [web services](web-services), [private services](private-services), and [background workers](background-workers). If you _don't_ define a pre-deploy command for a service, Render proceeds directly from the [build command](#build-command) to the [start command](#start-command). > **This command consumes [**pipeline minutes**](build-pipeline#pipeline-minutes) while running.** > > You receive an included allotment of pipeline minutes each month and can purchase more as needed. [View your usage.](https://dashboard.render.com/billing#included-usage) ### Start command Render runs this command to start your service when it's ready to deploy. #### Example start commands for each runtime | Runtime | Example Start Command(s) | |--------|--------| | Node.js | `yarn start` `npm start` `node index.js` | | Python | `gunicorn your_application.wsgi` | | Ruby | `bundle exec puma` | | Go | `./app` | | Rust | `cargo run --release` | | Elixir | `mix phx.server` | | Docker | By default, Render runs the `CMD` defined in your Dockerfile. You can specify a different command in the **Docker Command** field on your service's **Settings** page. 
> **To run multiple commands with Docker, provide those commands to `/bin/bash -c` as a single quoted string.** For example, here's a Docker Command for a Django service that runs database migrations and then starts the web server: `/bin/bash -c "python manage.py migrate && gunicorn myapp.wsgi:application --bind 0.0.0.0:10000"` | ## Managing deploys ### Handling overlapping deploys Only one deploy can run at a time per service. Sometimes, a deploy will trigger while _another_ deploy is still in progress. When this occurs, your service can do one of the following: | Policy | Description | |--------|--------| | **Wait** | Allow the in-progress deploy to finish, then proceed directly to the most recently triggered deploy: [img] - In this case, Render skips any "intermediate" deploys, such as Deploy B in the timeline above. - We recommend this option for most workspaces, because it helps maintain a regular cadence of deploys during periods of high change volume. - This is the default policy for workspaces created **on or after 2025-07-14**. | | **Override** | Immediately cancel the in-progress deploy and start the new one. - This is the default policy for workspaces created **before 2025-07-14**. | You can set which of these policies to use for your workspace: 1. In the [Render Dashboard][dboard], open your workspace's **Settings** page. 2. Scroll down to the **Overlapping Deploy Policy** section and click **Edit**: [img] 3. Select an option and click **Save changes**. ### Canceling a deploy You can cancel an in-progress deploy in the [Render Dashboard][dboard]: 1. Go to your service's **Events** page and click the word **Deploy** in the corresponding event entry. - This opens the deploy's details page. 2. Click **Cancel deploy**: [img] If you cancel an in-progress deploy while another deploy is [waiting](#handling-overlapping-deploys), Render immediately kicks off the waiting deploy.
### Restarting a service If your service is misbehaving, you can perform a restart from the service's page in the [Render Dashboard][dboard]. Click **Manual Deploy > Restart service**: [img] On Render, a service restart is actually a special form of [manual deploy](#manual-deploys): - Like any other deploy, Render creates a completely new instance of your service and swaps over to it when it's ready. - This makes restarting a [zero-downtime action](#zero-downtime-deploys). - If your service is [scaled](scaling) to multiple instances, a restart applies to all instances. - _Unlike_ other deploys, the new instance always uses the exact same Git commit and configuration as the running instance at the time of the restart. - This means that if you've recently updated your service's environment variables but haven't redeployed since then, restarting does _not_ incorporate those changes. ### Rolling back a deploy See [Rollbacks](rollbacks). ## Deployment concepts ### Ephemeral filesystem By default, Render services have an **ephemeral filesystem**. This means that any changes a running service makes to its filesystem are _lost_ with each deploy. To persist data across deploys, do one of the following: - Create and connect to a Render-managed datastore (Render [Postgres](postgresql) or [Key Value](key-value)). - Create and connect to a custom datastore, such as [MySQL](deploy-mysql) or [MongoDB](deploy-mongodb). - Attach a [persistent disk](disks) to your service. - Note the [limitations of persistent disks](disks#disk-limitations-and-considerations). ### Zero-downtime deploys Whenever you deploy a new version of your service, Render performs a sequence of steps to make sure the service stays up and available throughout the deploy process—even if the deploy fails. This **zero-downtime deploy** sequence applies to web services, private services, background workers, and cron jobs. 
Static sites _also_ update with zero downtime, but they're backed by a CDN and don't involve service instances. [Learn more about service types.](service-types#summary-of-service-types) > Adding a persistent disk to your service _disables_ zero-downtime deploys for it. [See details.](disks#disk-limitations-and-considerations) #### Sequence of events 1. When you push up a new version of your code, Render attempts to build it. - If the build fails, Render cancels the deploy, and your original service instance continues running without interruption. 2. If the build succeeds, Render attempts to spin up a _new_ instance of your service running the new version of your code. - **For web services and private services,** your _original_ instance continues to receive all incoming traffic while the new instance is spinning up: [diagram] 3. If the new instance spins up successfully (for web services, you can help verify this by setting up [health checks](health-checks)), Render updates your current deployed commit accordingly. - **For web services and private services,** Render also updates its networking configuration so that your _new_ instance begins receiving all incoming traffic: [diagram] 4. After 60 seconds, Render sends a `SIGTERM` signal to your app's process on the _original_ instance. - This signals your app to perform a [graceful shutdown](#graceful-shutdown). 5. If your app's process doesn't exit within its specified **shutdown delay** (default 30 seconds), Render sends a `SIGKILL` signal to force the process to terminate. - You can extend your service's shutdown delay. [See details.](#setting-a-shutdown-delay) [diagram] 6. For web services with [edge caching](web-service-caching) enabled, Render purges all of the service's cache entries. - This helps ensure that clients receive up-to-date content. [See details.](web-service-caching#invalidation-and-expiration) 7. The zero-downtime deploy is complete. 
**For services that are [scaled](scaling) to multiple instances,** Render performs steps 2-5 for one instance at a time. If _any_ new instance fails to become healthy during this process, Render cancels the entire deploy and reverts to instances running the previous version of your service. ### Graceful shutdown As part of deploying your service to a new instance, Render triggers a shutdown of the _current_ instance by sending your application a `SIGTERM` signal. Your application should define logic to perform a graceful shutdown in response to this signal. Common shutdown actions include: - Responding to remaining in-flight HTTP requests - Completing in-progress worker tasks (or marking them as failed so they're retried by other workers) - Terminating outbound connections to external services - Exiting with a zero status after other cleanup actions are complete If your service is still running after its configured **shutdown delay** (default 30 seconds), Render sends your application a `SIGKILL` signal. This terminates the application immediately with a non-zero status. #### Setting a shutdown delay If your service needs more than 30 seconds to complete a graceful shutdown, you can specify a longer shutdown delay (up to a maximum of 300 seconds) in one of the following ways: - Call the Render API's [Update service](https://api-docs.render.com/reference/update-service) endpoint and set the `maxShutdownDelaySeconds` field to the desired value. - Add the [`maxShutdownDelaySeconds`](blueprint-spec#maxshutdowndelayseconds) field to your service's associated `render.yaml` configuration. - Use this method if you manage your service with a [Blueprint](infrastructure-as-code). # Supported Languages Render natively supports *Node.js* / *Bun*, *Python*, *Ruby*, *Go*, *Rust*, and *Elixir*. While [creating a service](https://dashboard.render.com/create?type=web), just link your GitHub/GitLab/Bitbucket repo, choose the runtime for your language, and specify a branch to deploy. 
Plus, you can use virtually _any_ programming language if you [deploy your code as a Docker image](#docker-support). ## Set your language version By default, Render uses a recent, actively supported version of each natively supported language (listed in the table below). *However, we still recommend setting a language version for your service.* Doing so helps you ensure consistent behavior between Render and your other environments (such as development). See the table to learn how to set your language version: | Language | Default Version* | How to Set a Version | |--------|--------|--------| | [*Node.js*](node-version) | `22.16.0` | Set the `NODE_VERSION` [environment variable](configure-environment-variables#setting-environment-variables), or add a `.node-version` file to your project root containing only the version number: `21.1.0` For additional options, see [Setting Your Node.js Version](node-version). | | [**Bun**](bun-version) | `1.3.4` | Set the `BUN_VERSION` [environment variable](configure-environment-variables#setting-environment-variables): `1.1.0` | | [**Python**](python-version) | `3.13.4` | Set the `PYTHON_VERSION` [environment variable](configure-environment-variables#setting-environment-variables), or add a `.python-version` file to your project root containing only the version number: `3.12.11` For details, see [Setting Your Python Version](python-version). You can also set versions for the following package management tools: - [uv](uv-version) - [Poetry](poetry-version) | | [**Ruby**](ruby-version) | `3.4.4` | Set [the `ruby` directive](https://bundler.io/guides/gemfile_ruby.html) in your `Gemfile`, or add a `.ruby-version` file to your project root containing only the version number: `3.1.4` For details, see [Setting Your Ruby Version](ruby-version). | | **Go** | `1.25.0` | Render's native Go environment _always_ uses the latest stable Go `1.x` version. You can't set a different version unless you deploy a [Docker image](#docker-support). 
| | [**Rust**](rust-toolchain) | `stable` | Set the `RUSTUP_TOOLCHAIN` [environment variable](configure-environment-variables#setting-environment-variables), or add a `rust-toolchain` file to your project root containing only the toolchain version: `beta` For details, see [Specifying a Rust Toolchain](rust-toolchain). | | [**Elixir**](elixir-erlang-versions) | `1.18.4` | Set the `ELIXIR_VERSION` and/or `ERLANG_VERSION` [environment variables](configure-environment-variables#setting-environment-variables). If you don't set `ERLANG_VERSION`, Render automatically uses an Erlang version that's compatible with your `ELIXIR_VERSION`. For details, see [Setting Your Elixir and Erlang Versions](elixir-erlang-versions). | | **Other languages** | N/A | To use any language besides those listed above, deploy your code as a [Docker image](docker). | > **\*Render updates the default version for each language over time.** > > With the exception of Go and Rust, a particular service's default language version depends on when that service was first created. For details, see the version documentation for your language (linked from the table above). ### Minimum supported language versions Render services cannot use versions of certain languages earlier than those listed below: | Language | Minimum Supported Version | | -------- | ------------------------- | | Python | `3.7.3` | | Ruby | `3.1.0` | | Elixir | `1.12.0` | | Erlang | `24.3.4` | > *Render periodically updates the underlying version of Debian used by all services.* > > The language versions above correspond to the minimum supported versions for Debian 12.x [bookworm](https://www.debian.org/releases/bookworm/). ## Docker support When you deploy a Docker image on Render, it can use virtually _any_ programming language and framework. This is true regardless of whether you: - [Build your image on Render](docker#building-from-a-dockerfile), or - [Pull a prebuilt image](deploying-an-image) from your container registry. 
Learn more about [Docker versus native runtimes](docker#docker-or-native-runtime). # Build Pipeline Render's *build pipeline* handles the tasks that occur _before_ a new deploy of your service goes live. Depending on your service, these tasks might include: - Running your [build command](deploys#build-command) (`yarn`, `pip install`, etc.) - Running your [pre-deploy command](deploys#pre-deploy-command) (for database migrations, asset uploads, etc.) - Building an image from a Dockerfile All pipeline tasks consume [pipeline minutes](#pipeline-minutes). Each workspace receives an included monthly allotment of pipeline minutes, and you can purchase additional minutes as needed. [*Professional* workspaces](professional-features) and higher can enable the [Performance pipeline tier](#pipeline-tiers) to run pipeline tasks on larger compute instances. > View your current month's pipeline usage from your [Billing page](https://dashboard.render.com/billing#unbilled-charges). ## Pipeline tiers [*Professional* workspaces](professional-features) and higher can choose between two pipeline tiers: *Starter* and *Performance*. > Hobby workspaces always use the *Starter* tier. | Tier | Specs | Description | |--------|--------|--------| | **Starter** (default) | 2 CPU
8 GB RAM | *For Hobby workspaces,* includes 500 [pipeline minutes](#pipeline-minutes) per month. *For [Professional workspaces](professional-features) and higher,* includes 500 minutes _per member_ per month (shared among all members). Recommended unless your pipeline tasks require additional memory or CPU. | | **Performance** | 16 CPU
64 GB RAM | Available only for [*Professional* workspaces](professional-features) and higher. Runs tasks on compute instances with significantly higher memory and CPU. _Does not_ provide an included monthly allotment of [pipeline minutes](#pipeline-minutes). Performance pipeline minutes are billed at a higher rate than Starter minutes. Use this tier if your pipeline tasks require memory or CPU beyond what's provided by the Starter tier. | Specs and pricing details for each tier are available from your *Workspace Settings* page in the [Render Dashboard][dboard]. ### Setting your pipeline tier > *Your pipeline tier is a workspace-wide setting.* Every pipeline task across your workspace uses the same tier. 1. In the [Render Dashboard][dboard], go to your *Workspace Settings* page. 2. In the *Build Pipeline* section, select a pipeline tier. 3. Confirm your selection in the dialog that appears. ## Pipeline minutes While they're running, your builds and other pipeline tasks consume *pipeline minutes*. You can view your current month's usage from your [Billing page](https://dashboard.render.com/billing#unbilled-charges). > *Pipeline minutes are specific to their associated tier.* You can't use Starter minutes with the Performance tier or vice versa. ### Included minutes *Hobby* workspaces receive 500 [Starter-tier](#pipeline-tiers) pipeline minutes per month. [*Professional* workspaces](professional-features) and higher receive 500 Starter-tier minutes _per member_ per month (shared among all members). The Performance tier does _not_ provide an included monthly allotment of pipeline minutes. ### Running out of minutes If you run out of pipeline minutes during a given month, you automatically purchase an additional allotment of minutes for your current [tier](#pipeline-tiers), *unless*: - You've reached your monthly [spend limit](#setting-a-spend-limit), or - You haven't added a payment method. 
In the above cases, *Render stops running pipeline tasks* (including service builds!) for the remainder of the current month. You can reenable pipeline tasks by raising your spend limit (and adding a payment method if you haven't). ### Setting a spend limit You can set a maximum amount to spend on pipeline minutes each month. As long as you're under your limit for a given month, you automatically purchase an additional allotment of minutes whenever you run out. 1. In the [Render Dashboard][dboard], go to your *Workspace Settings* page. 2. In the *Build Pipeline* section, click *Set spend limit* (or *Edit* if you're editing an existing limit). 3. Specify a new limit in the dialog that appears. ## Build limits - Render cancels a build if any of the following occurs: - Memory usage exceeds the limit for your [pipeline tier](#pipeline-tiers). - Disk space usage exceeds 16 GB. - Your build command fails or times out (after 120 minutes). - Your pre-deploy command fails or times out (after 30 minutes). - Each Render service can have only one active build at a time. - Whenever a new build is initiated, Render cancels any in-progress build for the same service. - Builds don't have access to your running service instance's resources (such as memory or disk). - This is because pipeline tasks run on completely separate compute. # Deploy Hooks *Deploy hooks* enable you to trigger an on-demand deploy of your Render service with a single HTTP request. Use deploy hooks with: - CI/CD environments like GitHub Actions ([see an example](#using-with-github-actions)) - [Image-backed services](#deploying-from-an-image-registry) (to trigger a deploy when a new image is available) - Headless CMS systems like [Contentful](https://www.contentful.com/developers/docs/concepts/webhooks/) (to trigger a deploy when content changes) > *Looking for webhooks?* See [this article](webhooks). 
## Triggering a deploy Each service has a secret *deploy hook URL*, available from its *Settings* tab in the [Render Dashboard][dboard]: [img] > *Your deploy hook URL is a secret!* > > Provide the URL only to people and systems you trust to trigger deploys. If you believe a deploy hook URL has been compromised, replace it by clicking *Regenerate Hook*. To trigger a deploy, send a basic `GET` or `POST` request to your service's deploy hook URL—no special headers required. ```bash curl https://api.render.com/deploy/srv-xyz… ``` ## Deploying from an image registry [Image-backed services](deploying-an-image) on Render pull and deploy a prebuilt Docker image from an external registry. These services do _not_ automatically redeploy if a new image is pushed to the registry. You can use a deploy hook to trigger a deploy whenever an updated image is available. To deploy a specific tag or digest, append the `imgURL` query parameter to your deploy hook URL: ```bash # Append a string with this format to your deploy hook URL. # This example deploys the image `nginx:1.26` from Docker Hub. # Note the URL-encoding (%2F is a slash, %3A is a colon). &imgURL=docker.io%2Flibrary%2Fnginx%3A1.26 ``` If you _don't_ provide this parameter, Render uses whichever tag or digest you've specified in the service's settings. > All components of `imgURL` _besides_ the tag or digest must match your service's default image URL. Otherwise, Render rejects the deploy request. ## Using with GitHub Actions You might want to trigger a service deploy from your CI/CD environment whenever certain conditions are met (such as when all of your tests pass). Let's set this up using deploy hooks and [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions). ### 1. Create a repository secret Deploy hook URLs are secret values, so we need to make sure to [store ours as a secret](https://docs.github.com/en/actions/reference/encrypted-secrets) in our GitHub repo: 1.
Go to your GitHub repo's **Settings** page. 2. Click **Secrets and variables > Actions**. 3. Click **New repository secret**. Create a secret with the name `RENDER_DEPLOY_HOOK_URL` and provide your deploy hook URL as the value: [img] ### 2. Add a GitHub workflow Now that we've added our deploy hook URL, let's create a GitHub workflow that uses it: 1. Create a `.github/workflows` directory in your repo if it doesn't already exist. GitHub Actions automatically detects and runs any workflows defined in this folder. 2. Add a YAML file to this directory to represent your new workflow. The [example below](#example-workflow) uses the file path `.github/workflows/ci.yml`. 3. Define logic in your workflow to trigger a deploy after any prerequisite steps succeed. [See the example.](#example-workflow) 4. Commit all of your changes. #### Example workflow This example workflow defines a job named `ci` that includes two steps (`Test` and `Deploy`). The workflow runs whenever any pull request is opened against `main`, or when commits are pushed to `main`. 1. The `Test` step runs the repo's defined unit tests. 2. The `Deploy` step executes a `curl` request to our deploy hook URL _only if_ the current branch is `main` _and_ the `Test` step succeeded. ```yaml # .github/workflows/ci.yml on: pull_request: branches: [main] push: branches: [main] jobs: ci: runs-on: ubuntu-latest steps: - uses: actions/checkout@v5 - name: Test run: | npm install npm run test - name: Deploy # Only run this step if the branch is main if: github.ref == 'refs/heads/main' env: deploy_url: ${{ secrets.RENDER_DEPLOY_HOOK_URL }} run: | curl "$deploy_url" ``` # Connect GitHub Connect your [GitHub](https://github.com) account to Render to start deploying apps and sites using any repo you have access to. Render automatically redeploys your project with every push to your linked branch (you can [disable this](deploys#disabling-auto-deploys)).
Render can also spin up a [preview instance](service-previews) of your project with every opened pull request to help you validate changes. ## Setup 1. When you create your first service in the [Render Dashboard][dboard], you're prompted to connect your Git provider: [img] 2. Click *GitHub*. This redirects you to GitHub so you can authorize Render to access your repositories. 3. You're then redirected back to the Render Dashboard, which now displays a list of your GitHub repos: [img] You've successfully linked your GitHub account! Whenever you create a new service, select any available repo and click *Connect*. Then complete the remainder of the service creation flow. ## Pull request previews Render can automatically build and deploy a preview instance of your service for every pull request that's opened against your project. For details, see [Service previews](service-previews#pull-request-previews-git-backed). ## Git submodules If your repo defines a `.gitmodules` file at its root, Render automatically reads it and clones all specified [Git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) as part of your service's build process. > If your `.gitmodules` file includes _private_ submodules, Render can clone them only if your linked GitHub account has access to the corresponding private repository. ## Log in with GitHub In addition to deploying projects from GitHub, you can use your GitHub account to log in to the [Render Dashboard][dboard]. If you have an existing Render account that matches your GitHub account's primary email address, Render logs you in to that existing account automatically. Learn more about [managing login methods](login-settings#managing-login-methods). ## Troubleshooting If your GitHub deploys aren't working as expected, this might be caused by misconfiguration of Render's GitHub app. For example, it might be configured for the wrong set of repositories, or a repository that was previously public might have been made private. 
### Fixing GitHub app permissions

Visit [github.com/apps/render/installations/new](https://github.com/apps/render/installations/new). From here, you can install the app in a new organization or configure an existing installation:

[img]

In the configuration view, check the *Repository access* section to make sure your repository is included:

[img]

### Team-specific issues

If the creator of a Render service loses access to that service's connected GitHub repository, it can disrupt deploys for that service.

You can update the Git credentials used to deploy a service from the service's *Settings* page in the [Render Dashboard][dboard]:

[img]

Before you make this change, make sure that the new credentials have access to the service's Git repository.

# Connect GitLab

Render connects with GitLab to deploy your apps and websites automatically on every push to your project. You can connect all your public and private projects on [gitlab.com](https://gitlab.com) to Render and use the *Render for GitLab* integration to create web services, static sites, background workers, and more. You can also use *Render for GitLab* to automatically create Merge Request Preview URLs for your web apps and static sites.

## Connecting GitLab

When you create your first service on Render, you'll have the option to connect GitLab on a screen that looks like this:

[img]

Clicking *Connect GitLab* redirects you to [gitlab.com](https://gitlab.com), where you can authorize *Render for GitLab* to access your repositories and install webhooks that allow us to act on repo updates. You're then redirected back to Render, where you'll see a list of your GitLab repos. Proceed by clicking the repo you'd like to use for your service.

## Merge Request Previews

Render can automatically build and deploy GitLab merge requests if [pull request previews](service-previews) are enabled for your web service.
Once enabled, you will see a comment from *Render for GitLab* when your merge request is created. It should look similar to this:
[img]
Render creates a unique URL for every merge request and builds and deploys the latest changes as they're pushed to the MR. These preview instances are automatically deleted when the corresponding MR is merged or closed.

## Git Submodules

Render will read a `.gitmodules` file at the root of your repo and automatically clone all [Git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) defined in it. Private submodules are cloned if they are owned by the same GitLab account as the base repository.

## Log in with GitLab

In addition to using GitLab projects to deploy apps, you can also use your [gitlab.com](https://gitlab.com) account to sign up for Render and for subsequent logins. If you already have an account on Render that matches your primary GitLab email, you will be logged in to the existing account automatically. Learn more about [managing login methods](login-settings#managing-login-methods).

# Connect Bitbucket

Connect your [Bitbucket](https://bitbucket.org/) account to Render to start deploying apps and sites using any repo you have access to.

Render automatically redeploys your project with every push to your linked branch (you can [disable this](deploys#disabling-auto-deploys)). Render can also spin up a [preview instance](service-previews) of your project with every opened pull request to help you validate changes.

## Setup

1. When you create your first service in the [Render Dashboard][dboard], you're prompted to connect your Git provider:

   [img]

2. Click *Connect Bitbucket*. This redirects you to Bitbucket so you can authorize Render to access your repositories.

3. You're then redirected back to the Render Dashboard, which now displays a list of your Bitbucket repos:

   [img]

You've successfully linked your Bitbucket account! Whenever you create a new service, click the *Connect* button for whichever repo you want to use for that service. Then complete the remainder of the service creation flow.
## Pull request previews Render can automatically build and deploy a preview instance of your service for every pull request that's opened against your project. For details, see [Service previews](service-previews#pull-request-previews-git-backed). ## Git submodules If your repo defines a `.gitmodules` file at its root, Render automatically reads it and clones all specified [Git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) as part of your service's build process. > If your `.gitmodules` file includes _private_ submodules, Render can clone them only if your linked Bitbucket account has access to the corresponding private repository. ## Log in with Bitbucket In addition to deploying projects from Bitbucket, you can use your Bitbucket account to log in to the [Render Dashboard][dboard]. If you have an existing Render account that matches your Bitbucket account's primary email address, Render logs you in to that existing account automatically. Learn more about [managing login methods](login-settings#managing-login-methods). # Deploying a specific commit > *Urgently need to deploy a recent build to revert an error?* See [Rollbacks](rollbacks). By default, after you connect your [GitHub](github)/[GitLab](gitlab)/[Bitbucket](bitbucket) repository, Render automatically builds and deploys the _latest_ commit from that repository's linked branch. The same is true for your service's [previews](service-previews) and [preview environments](preview-environments). If you ever want to deploy a _specific_ commit from your branch's history, see options below. > *Deploying a specific commit disables [*automatic deploys*](deploys#automatic-git-deploys) for the service.* You can reenable automatic deploys from the service's page in the [Render Dashboard][dboard]. > > If you reenable automatic deploys, Render once again automatically deploys the most recent commit for your linked branch. 
## Deploying from the dashboard

To manually deploy any commit from your repository, open your service's page in the [Render Dashboard][dboard] and click *Manual Deploy > Deploy a specific commit*:

[img]

Select a commit in the modal that appears, then click *Deploy Commit*. Render immediately kicks off a deploy.

## Deploying via webhook

Every Render service has a [deploy hook URL](deploy-hooks) that you can use to trigger a deploy via an HTTP request. To deploy a specific commit via this hook, include a `ref` query parameter that specifies the commit SHA to deploy:

```bash
# Full commit SHA
curl "https://api.render.com/deploy/srv-XXYYZZ?key=AABBCC&ref=baaa339926cb474b61c1f0e6297b024eaa09ac7d"

# Short commit SHA
curl "https://api.render.com/deploy/srv-XXYYZZ?key=AABBCC&ref=baaa339"
```

As shown, you can provide either a full or short commit SHA. A `GET` or `POST` request to the hook URL returns `200 OK` if the provided commit SHA is valid and a deploy has started. The request returns `404 Not Found` if the SHA is invalid.

# Monorepo Support

A *monorepo* is a single Git repository that contains the source code for multiple related applications:

```bash
# A monorepo containing a Python backend and a JavaScript frontend
📁 my-monorepo
|
├── README.md
├── 📁 backend
│   ├── app.py
│   ├── README.md
│   ├── requirements.txt
│   └── 📁 tests
│       └── test_app.py
└── 📁 frontend
    ├── 📁 components
    │   └── login.js
    ├── index.js
    ├── package.json
    ├── README.md
    └── 📁 src
        └── auth.js
```

You can deploy the individual apps in a monorepo as separate Render services. You can also configure each service to redeploy only if you make changes to its corresponding files:

- **Set a service's [root directory](#setting-a-root-directory)** to ignore file changes _outside_ that directory.
- **Set [build filters](#setting-build-filters)** to ignore file changes that match specific [path patterns](#filter-syntax).

Specify any combination of a root directory and build filters to customize your service's autodeploy behavior.
## Setting a root directory By default, Render [automatically deploys](deploys#automatic-deploys) your service whenever you push _any_ changes to its linked Git branch. If you set a **root directory** for your service, Render only triggers an autodeploy if your changes affect files anywhere under that directory. This helps you avoid unnecessary deploys when working in a monorepo. > Files outside your service's root directory are not available to the service at build time or at runtime. Set your service's root directory in any of the following ways: **Dashboard** 1. In the [Render Dashboard][dboard], open the **Settings** page for the service you want to configure. 2. Scroll down to the **Build & Deploy** section and find the **Root Directory** setting: [img] 3. Click **Edit**. 4. Specify the root directory to use and click **Save Changes**. 5. In the dialog that appears, verify your service's build and start commands (which will now run relative to the new root directory). 6. Click **Update Fields**. **API** Set the `rootDir` field in a request to the Render API's [Update Service](https://api-docs.render.com/reference/update-service) endpoint. In the same request, update values for the following fields as needed to be relative to the new root directory: - `buildCommand` - `startCommand` - `preDeployCommand` - `dockerfilePath` - `dockerContext` - `staticPublishPath` **Blueprints (render.yaml)** > **Use this method only if you manage your services with [Blueprints](infrastructure-as-code).** 1. In your Blueprint's `render.yaml` file, add the `rootDir` key to the definition of each applicable service: ```yaml services: - type: web name: app-backend runtime: python rootDir: backend buildCommand: pip install -r requirements.txt startCommand: python app.py - type: web name: app-frontend runtime: node rootDir: frontend buildCommand: npm install startCommand: npm start ``` 2. 
For each of the following fields that your service uses, update the values as needed to be relative to the new root directory: - `buildCommand` - `startCommand` - `preDeployCommand` - `dockerfilePath` - `dockerContext` - `staticPublishPath` 3. Save and deploy your changes. If you don't set a root directory, Render uses the repository root as the default. ### Root-relative settings Render runs commands and interacts with files relative to your service's root directory. All of the following settings operate relative to the root directory: - Build command - Start command - Pre-deploy command - Publish directory - Dockerfile path - Docker build context directory If you _don't_ set a root directory for a monorepo-backed service, the service's build command might look like this: ```shell cd backend && go build -o app . # Starts at repository root ``` Setting the service's root directory to the `backend` directory simplifies the build command to this: ```shell go build -o app . # Starts in backend directory ``` ## Setting build filters Set **build filters** for your service to specify which files in your repo do (or don't) trigger an autodeploy when you push changes to them: [img] Configure your service's build filters in any of the following ways: **Dashboard** 1. In the [Render Dashboard][dboard], open the **Settings** page for the service you want to configure. 2. Scroll down to the **Build & Deploy** section and find the **Build Filters** setting. 3. Click **Edit**. 4. Click **+ Add Included Path** and/or **+ Add Ignored Path** as needed. 5. Enter the [path patterns](#filter-syntax) for all paths you want to include and ignore. 6. Click **Save Changes**. **API** Set the `buildFilter` field in a request to the Render API's [Update Service](https://api-docs.render.com/reference/update-service) endpoint. 
Here's an example payload: ```json { "buildFilter": { "paths": ["frontend/**"], "ignoredPaths": ["docs/**", "README.md"] } } ``` Note that the property name for included paths is `paths` (not `includedPaths`). **Blueprints (render.yaml)** > **Use this method only if you manage your services with [Blueprints](infrastructure-as-code).** Add the `buildFilter` key to the definition of each applicable service in your Blueprint's `render.yaml` file: ```yaml{7-12} services: - type: web name: app-frontend runtime: node rootDir: frontend buildCommand: npm install startCommand: npm start buildFilter: paths: - frontend/** ignoredPaths: - docs/** - README.md ``` ### Filter rules Build filters can define rules for included paths, ignored paths, or both: | Path type | Description | |--------|--------| | **Included paths** | Changes that match an included path **will** trigger an autodeploy, **unless** those files also match an ignored path. - If you specify at least one included path, all non-matching paths are automatically ignored (you don't need to specify them as ignored paths). - If you don't specify any included paths, _all_ file changes trigger an autodeploy unless they match an ignored path. | | **Ignored paths** | Changes that match an ignored path **will not** trigger an autodeploy, **even if** those files also match an included path. In other words, ignoring a path takes precedence over including it. | Build filter paths are always relative to your _repository_ root, even if you’ve set a different [root directory](#setting-a-root-directory). This means your build filters _can_ include paths from other directories in your repo. ### Filter syntax Build filters use **glob syntax** to define the patterns for included and ignored file paths. See supported wildcards and example usage below: | Syntax &
Description | Example |
|--------|--------|
| **`?`** Matches any single character except the file path separator `/` | `frontend/sample.?s` Matches: - frontend/sample.**t**s Does not match: - frontend/index.js - frontend/components/login.jsx |
| **`*`** Matches zero or more characters except the file path separator `/` | `backend/util/*.go` Matches: - backend/util/**util**.go - backend/util/**util_test**.go Does not match: - backend/main.go - backend/util/readme.md |
| **`**`** Matches zero or more directories or sub-directories | `**/readme.md` Matches: - readme.md - **backend**/readme.md - **frontend/src**/readme.md Does not match: - backend/main.go - frontend/index.js |
| **`[abc]`** Matches one character specified in the bracket | `frontend/src/auth[nz].js` Matches: - frontend/src/auth**n**.js - frontend/src/auth**z**.js Does not match: - frontend/src/auth.js |
| **`[^abc]`** Matches one character that is NOT specified in the bracket | `backend/build/[^ax]*.sh` Matches: - backend/build/**q**emu.sh Does not match: - backend/build/x86.sh - backend/build/amd64.sh |
| **`[lo-hi]`** Matches one character (c) within the range lo <= c <= hi | `backend/**/*[0-9].sh` Matches: - backend/build/x8**6**.sh - backend/build/amd6**4**.sh Does not match: - backend/build/qemu.sh |
| **`[^lo-hi]`** Matches one character (c) that is NOT within the range lo <= c <= hi | `backend/build/*[^0-9].sh` Matches: - backend/build/qem**u**.sh Does not match: - backend/build/x86.sh - backend/build/amd64.sh |

## Using with service previews

Your service's root directory and build filters also affect the creation of [pull request previews](service-previews#pull-request-previews-git-backed) (if you've enabled them).

If you open a pull request that only modifies ignored files for a service, Render skips creating a preview instance for that pull request.
A file might be ignored because it's outside your service's [root directory](#setting-a-root-directory), or because it matches one of your build filter's [ignored paths](#filter-rules). ## FAQ ###### Can I ignore changes to my repo's render.yaml file? **No.** Changes to `render.yaml` are always processed regardless of your build filters. [Blueprint syncs](infrastructure-as-code) are also unaffected by build filters. ###### Do build filters affect any deploys besides autodeploys? **No.** If you trigger a [manual deploy](deploys#manual-deploys) or update your service's configuration (such as its build command or start command), Render always proceeds with the deploy regardless of your build filters. ###### Do build filters do anything if I've disabled autodeploys? **Yes.** Although build filters don't affect your service's deploys in this case, they can still affect the creation of [pull request previews](#using-with-service-previews). ###### Do build filters and root directory work with preview environments? **Yes.** If you define the root directory or specify build filters for each service in your `render.yaml` file, Render only creates a [preview environment](preview-environments) if the files changed in a pull request match the root directory or build filter paths for at least one service. # Docker on Render Render fully supports Docker-based deploys. Your services can: - [Pull and run a prebuilt image](deploying-an-image) from a registry such as Docker Hub, or - [Build their own image](#building-from-a-dockerfile) at deploy time based on the Dockerfile in your project repo. > *Render also provides [native language runtimes](language-support) that don't require Docker.* > > If you aren't sure whether to use Docker or a native runtime for your service, see [this section](#docker-or-native-runtime). 
## Docker deployment methods

### Pulling from a container registry

To pull a prebuilt Docker image from a container registry and run it on Render, see [this article](deploying-an-image).

### Building from a Dockerfile

Render can build your service's Docker image based on the Dockerfile in your project repo. To enable this, apply the following settings in the [Render Dashboard](https://dashboard.render.com/) during service creation:

1. Set the *Language* field to *Docker* (even if your application uses a language listed in the dropdown):

   [img]

2. If your Dockerfile is _not_ in your repo's root directory, specify its path (e.g., `my-subdirectory/Dockerfile`) in the *Dockerfile Path* field:

   [img]

3. If your build process needs to pull any private image dependencies from a container registry (such as Docker Hub), provide a corresponding credential in the *Registry Credential* field under *Advanced*:

   [img]

   Learn more about [adding registry credentials](deploying-an-image#credentials-for-private-images).

4. If Render should run a custom command to start your service instead of using the `CMD` instruction in your Dockerfile (this is uncommon), specify it in the *Docker Command* field under *Advanced*:

   [img]

   > *To run multiple commands, provide them to `/bin/bash -c` as a single quoted string.*
   >
   > For example, here's a *Docker Command* for a Django service that runs database migrations and then starts the web server:
   >
   > ```
   > /bin/bash -c "python manage.py migrate && gunicorn myapp.wsgi:application --bind 0.0.0.0:10000"
   > ```

   Note that you can't customize the command that Render uses to _build_ your image.

5. Specify the remainder of your service's configuration as appropriate for your project and click the **Deploy** button.

You're all set! Every time a deploy is triggered for your service, Render uses [BuildKit](https://docs.docker.com/build/buildkit/) to generate an updated image based on your repo's Dockerfile. Render stores your images in a private, secure container registry.
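To see why the quoting around a multi-command *Docker Command* matters, compare what `bash -c` actually receives. A minimal sketch you can run locally (the `echo` commands stand in for the migration and server-start commands):

```shell
# bash -c treats its next argument as the entire script to run.
# Quoting the full sequence ensures both commands execute in order.
out=$(/bin/bash -c 'echo run-migrations && echo start-server')
echo "$out"
```

Without the quotes, only the first word (`echo`) would be passed to the inner shell as its script, and the rest of the command line would be interpreted elsewhere.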
Your Docker-based services support [zero-downtime deploys](deploys#zero-downtime-deploys), just like services that use a native language runtime.

## Docker or native runtime?

Render provides [native language runtimes](language-support) for **Node.js**, **Python**, **Ruby**, **Go**, **Rust**, and **Elixir**. If your project uses one of these languages and you don't _already_ use Docker, it's usually faster to get started with a native runtime. See [Your First Render Deploy](your-first-deploy).

**You _should_ use Docker for your service in the following cases:**

- Your project already uses Docker.
- Your project uses a language that Render doesn't support natively, such as [PHP](deploy-php-laravel-docker) or a JVM-based language (such as Java, Kotlin, or Scala).
- Your project requires OS-level packages that aren't included in Render's [native runtimes](native-runtimes). With Docker, you have complete control over your base operating system and installed packages.
- You need guaranteed reproducible builds. Native runtimes receive regular updates to improve functionality, security, and performance. Although we aim to provide full backward compatibility, using a Dockerfile is the best way to ensure that your production runtime always matches local builds.
Most platform capabilities are supported identically for Docker-based services and native runtime services, including: - [Zero-downtime deploys](deploys#zero-downtime-deploys) - Setting a [pre-deploy command](deploys#pre-deploy-command) to run database migrations and other tasks before each deploy - [Private networking](private-network) - Support for [persistent disk storage](disks) - [Custom domains](custom-domains) - Automatic [Brotli](https://en.wikipedia.org/wiki/Brotli) and [gzip](https://en.wikipedia.org/wiki/Gzip) compression - [Infrastructure as code](infrastructure-as-code) support with Render Blueprints ## Docker-specific features ### Environment variable translation If you set [environment variables](configure-environment-variables) for a Docker-based service, Render automatically translates those values to [Docker build arguments](https://docs.docker.com/build/building/variables/#arg-usage-example) that are available during your image's build process. These values are also available to your service at runtime as standard environment variables. > **In your Dockerfile, do not reference any build arguments that contain sensitive values (such as passwords or API keys).** > > Otherwise, those sensitive values might be included in your generated image, which introduces a security risk. If you need to reference sensitive values during a build, instead add a secret file to your build context. For details, see [Using Secrets with Docker](docker-secrets). ### Image builds - Render supports parallelized [multi-stage](https://docs.docker.com/develop/develop-images/multistage-build/) builds. - Render omits files and directories from your build context based on your `.dockerignore` file. ### Image caching Render caches all intermediate build layers in your Dockerfile, which significantly speeds up subsequent builds. To further optimize your images and improve build times, follow [these instructions from Docker](https://docs.docker.com/build/building/best-practices/). 
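Layer caching rewards Dockerfiles that copy dependency manifests (and install dependencies) before copying the rest of the source, so the expensive install layer is reused when only application code changes. A minimal sketch for a Node.js app (file names assumed):

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Copy only the dependency manifests first so the npm install layer
# below is reused when application code changes.
COPY package.json package-lock.json ./
RUN npm ci

# Copying the rest of the source invalidates only the layers below it.
COPY . .
CMD ["npm", "start"]
```

With this ordering, a push that only touches application code reuses the cached `npm ci` layer on rebuild.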
Render also maintains a cache of public images pulled from container registries. Because of this, pulling an image with a mutable tag (e.g., `latest`) might result in a build that uses a cached, less recent version of the image.

To ensure that you _don't_ use a cached public image, do one of the following:

- Reference an immutable tag when you deploy (e.g., a specific version like `v1.2.3`).
- Add a registry credential for the image. For details, see [Credentials for private images](deploying-an-image#credentials-for-private-images).

## Popular public images

See quickstarts for deploying popular open-source applications using their official Docker images:

*Infrastructure components*

- [ClickHouse](deploy-clickhouse)
- [Elasticsearch](deploy-elasticsearch)
- [MongoDB](deploy-mongodb)
- [MySQL](deploy-mysql)
- [n8n](deploy-n8n)
- [Temporal](deploy-temporal)

*Blogging and content management*

- [Ghost](deploy-ghost)
- [WordPress](deploy-wordpress)

*Analytics and business intelligence*

- [Ackee](deploy-ackee)
- [Fathom Analytics](deploy-fathom-analytics)
- [GoatCounter](deploy-goatcounter)
- [Matomo](deploy-matomo)
- [Metabase](deploy-metabase)
- [Open Web Analytics](deploy-open-web-analytics)
- [Redash](deploy-redash)
- [Shynet](deploy-shynet)

*Communication and collaboration*

- [Forem](deploy-forem)
- [Mattermost](deploy-mattermost)
- [Zulip](deploy-zulip)

# Deploy a Prebuilt Docker Image

You can deploy a prebuilt Docker image to any of the following Render service types (if the image meets the [necessary requirements](#image-requirements)):

- Web services
- Private services
- Background workers
- Cron jobs

You can deploy public images from any registry Render can reach, and you can deploy _private_ images from the following registries:

- Docker Hub
- GitHub Container Registry
- GitLab Container Registry
- Google Artifact Registry
- AWS Elastic Container Registry (ECR)

A Render service that deploys a prebuilt Docker image is an *image-backed service*, as opposed to a
*Git-backed service* that [deploys commits from your Git repository](deploying-a-commit). ## Setup 1. In the [Render Dashboard](https://dashboard.render.com/), click *+ New* and select a service type to deploy. The service creation form appears. 2. Under *Source Code*, click *Existing Image*: [img] 3. Provide the image's URL, along with any required credentials for accessing the image if it's private. - Learn more about [private image credentials](#credentials-for-private-images). - The *Image URL* field uses default values for an image's host (`docker.io`), namespace (`library`), and tag (`latest`) if you don't provide them. The following values all resolve to the same URL: - `docker.io/library/alpine:latest` - `docker.io/library/alpine` - `library/alpine` - `alpine` - You can specify an image _digest_ instead of a tag, such as: ``` docker.io/library/alpine@sha256:c0669ef34cdc14332c0f1ab0c2c01acb91d96014b172f1a76f3a39e63d1f0bda ``` 4. Render verifies that it can access the image using any credentials you provide. After this succeeds, click the now-enabled **Connect** button. The remainder of the service creation form becomes active. 5. Configure your service's details (name, region, instance type, and so on). You can click **Advanced** for additional configuration options, such as specifying a custom Docker `CMD` for the service. 6. Click the **Deploy** button. You're all set! Render pulls the image from the registry and kicks off the service's initial deploy. ## Credentials for private images To deploy a private Docker image on Render, you need to provide a valid credential to pull that image. 
Render can pull private images from the following container registries: - Docker Hub - GitHub Container Registry - GitLab Container Registry - Google Artifact Registry - AWS Elastic Container Registry (ECR) You specify your image's credential as part of [creating your new service](#setup): [img] The **Credential** dropdown includes any _existing_ credentials you've already added to your workspace. You can reuse the same credential across multiple services. If you click **Add credential**, the following dialog appears: [img] Provide the following details: | Field | Description | |--------|--------| | **Name** | An identifying name for the credential. This value is for reference only. | | **Registry** | The container registry to pull from. | | **Username** | The username of the container registry account to use when authenticating. | | **Personal Access Token** | The registry-generated token that grants permission to access the image. [Learn how to generate a personal access token.](#generating-a-personal-access-token) For Docker Hub only, you can provide your password instead of a personal access token. | ### Generating a personal access token To generate a personal access token for Render to access your private Docker image, see the instructions for your container registry: #### Docker Hub Your personal access token requires access permissions that allow reading private images. - [Token creation page](https://hub.docker.com/settings/security?generateToken=true) - [Docker Hub documentation](https://docs.docker.com/docker-hub/access-tokens/) #### GitHub Your personal access token requires the `read:packages` permission to pull private images. 
- [Token creation page](https://github.com/settings/tokens/new?description=&scopes=read%3Apackages)
- [GitHub documentation](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry)

#### GitLab

Your personal access token requires the `read_registry` permission to pull private images.

- [Token creation page](https://gitlab.com/-/profile/personal_access_tokens?scopes=read_registry)
- [GitLab documentation](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html#create-a-personal-access-token)

#### Google Artifact Registry

Your service account requires the `roles/artifactregistry.reader` role to pull private images.

- [Service account creation page](https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts/create?walkthrough_id=iam--create-service-account#step_index=1)
- [Google documentation](https://cloud.google.com/artifact-registry/docs/docker/authentication#json-key)

#### AWS ECR

- For your **Username**, provide the AWS Account ID for the account that owns the AWS ECR repository.
- For your **Personal Access Token**, provide the password generated by the command `aws ecr get-login-password`.

> **ECR passwords expire after 12 hours.**
>
> To maintain a valid ECR credential, you need to generate a new password and apply it to the credential every 12 hours. You can update your credential programmatically using the Render API's [Update Registry Credential endpoint](https://api-docs.render.com/reference/update-registry-credential).
- [Find your AWS Account ID](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindAccountId) - [AWS ECR authorization token instructions](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html#registry-auth-token) ### Managing credentials You can manage your registry credentials from the **Container Registry Credentials** section of your Workspace Settings page: [img] From this section, you can: - Add a new credential - Remove a credential if it isn't used by any service > **You can't remove a credential if at least one service uses it.** > > First, create a _new_ credential and apply it to any services that currently use the credential you want to remove. ## Triggering a deploy Image-backed services do _not_ automatically redeploy whenever a new image is associated with their assigned tag (e.g., `latest`). Instead, you can redeploy using any of the following methods: ### Deploy from the Render Dashboard In the [Render Dashboard](https://dashboard.render.com/), select your image-backed service and then click **Manual Deploy > Deploy latest reference**: [img] This kicks off a deploy that pulls the image that's currently associated with the service's assigned tag. ### Deploy via webhook Each Render service has a [deploy hook](deploy-hooks) URL you can use to trigger a deploy via a `GET` or `POST` request. Your service's deploy hook URL is available from its Settings page in the [Render Dashboard](https://dashboard.render.com/): [img] When you deploy an image-backed service this way, you can optionally specify a tag or digest by appending an `imgURL` query parameter to the deploy hook URL: ```bash # Append a string with this format to your deploy hook URL. # This example deploys the image `nginx:1.26` from Docker Hub. # Note the URL-encoding. 
&imgURL=docker.io%2Flibrary%2Fnginx%3A1.26
```

If you do, Render pulls and deploys the image for the specified tag or digest, _instead of_ using the tag or digest in your service's settings. Note that for _future_ deploys, your service continues to use the tag or digest in its settings.

> All components of `imgURL` _besides_ the tag or digest must match your service's default image URL. Otherwise, Render rejects the deploy request.

Here's an example deploy hook URL that sets its `imgURL` to `docker.io/library/nginx:1.26` (note the required URL encoding):

```
https://api.render.com/deploy/srv-XXYYZZ?key=AABBCC&imgURL=docker.io%2Flibrary%2Fnginx%3A1.26
```

If the request successfully kicks off a deploy, Render returns a `200` response. If you provide an invalid `imgURL`, Render returns a `404` response.

### Running pre-deploy tasks

To run tasks like database migrations or asset uploads before each deploy of your prebuilt image, you can set a [pre-deploy command](deploys#deploy-steps) for your service.

## Image requirements

Before you deploy a particular Docker image to a Render service, note the following requirements:

#### `linux/amd64` platform

The Docker image must be built for the `linux/amd64` platform. To ensure this, do one of the following:

- Add the `--platform` flag to the [`FROM` instruction](https://docs.docker.com/engine/reference/builder/#from) in your `Dockerfile` (replace `<base-image>` with your image's base):

  ```Dockerfile
  FROM --platform=linux/amd64 <base-image>
  ```

- Specify the platform as a CLI flag when building the image:

  ```shell
  docker build --platform=linux/amd64 .
  ```

#### Image size

The Docker image's compressed size cannot exceed 10 GB.

#### Rollback support

To successfully [roll back a deploy](rollbacks) of an image-backed service, the image and digest used for the deploy you roll back to must be available in the container registry. Otherwise, the rollback deploy fails.
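If you build deploy hook requests in a script, you can generate the percent-encoded `imgURL` value instead of writing it by hand. A small sketch, assuming `python3` is available:

```shell
# Percent-encode an image reference for use as an imgURL query parameter.
img_ref="docker.io/library/nginx:1.26"
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$img_ref")
echo "$encoded"   # docker.io%2Flibrary%2Fnginx%3A1.26
```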
## Pulling images **Render does not store previously pulled Docker images.** Instead, Render pulls your service's associated image from your container registry for _every deploy_. In the case of cron jobs, Render pulls the associated image every time the cron job runs. In addition to pulling on every deploy, Render needs to pull an image in cases like the following: - You restart your service, and it's scheduled on a machine that doesn't have a locally cached copy of the image. - You create additional instances of your service via manual or automatic [scaling](scaling). - Your service's underlying hardware is retired or experiences a failure, and your service needs to be rescheduled on a new machine. Because Render relies on your container registry in all of these cases, make sure the images you use are always available in your registry! ### Image pull failures If Render fails to pull an image from your registry, an `Image Pull Failed` service event appears on the Render Dashboard. Additionally, Render sends a deploy failure notification if you've [enabled notifications](notifications) for the service. An image pull might fail for any of the following reasons: - The image no longer exists in the registry. - Your service uses a private image, and the [credential you've provided](#credentials-for-private-images) is no longer valid. - The registry is down or experiencing issues. When you encounter an image pull failure, first go to your service's Settings page on the [Render Dashboard](https://dashboard.render.com/). Under the *Deploy* section, verify the service's image URL and credential. If you need additional assistance, reach out to support@render.com. # Using Secrets with Docker At runtime, Docker services can access environment variables and secret files just like other service types. However, because of the way Docker builds work, environment variables and secret files aren't available in the usual way at build time.
## Security Before going into how to use your environment variables and secret files for Docker builds, you should know that using secrets with Docker can result in your image containing sensitive information. Although we store your images securely, Docker registries should be treated like code repositories: it's best practice not to store secrets in them. Avoid using secrets in your Docker builds to eliminate the chance of accidentally storing sensitive material. That said, some build processes _require_ credentials, for example to access private resources. For these, it's best to use [secret files](#secret-files-in-docker-builds). ## Secret Files in Docker Builds The best way to use secrets in your Docker build is with secret files. Unlike build args, secret mounts aren't persisted in your built image. Secret files in Docker builds use secret mounts, which are available with Dockerfile syntax v1.2. At the top of your Dockerfile, add ```dockerfile # syntax = docker/dockerfile:1.2 ``` Then, add `--mount=type=secret,id=FILENAME,dst=/etc/secrets/FILENAME` to your `RUN` instructions, replacing `FILENAME` with the name of your secret file. If your filename contains non-alphanumeric characters, replace them with `_` in the `id=` part. For example, if you have a secret file named `.env`, then using ```dockerfile RUN --mount=type=secret,id=_env,dst=/etc/secrets/.env cat /etc/secrets/.env ``` will print the content of `.env` during your build. You can use **multiple secret files** by adding more `--mount=type=secret,...` flags. > The `--mount=type=secret,...` flag must be included on every `RUN` instruction that requires the secret file. Read more about Docker secrets and secret mounts in the [Docker Docs](https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information).
### Building Images with Secrets Locally To build images locally with Dockerfiles that use secrets, you need a recent version of Docker installed. When you run `docker build`, ensure that BuildKit is enabled by setting `DOCKER_BUILDKIT=1`, and pass in secrets using the `--secret` argument like so: ```bash DOCKER_BUILDKIT=1 docker build --secret id=FILENAME,src=LOCAL_FILENAME ... ``` `FILENAME` is the same as the ID from `--mount=type=secret,id=FILENAME,...` in your Dockerfile, and `LOCAL_FILENAME` is the path to the corresponding secret file on your build host. Read more about Docker secrets and secret mounts in the [Docker Docs](https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information). ## Accessing Secret Files at Runtime If you add [secret files](configure-environment-variables#secret-files) to a Docker-based service, those files are available at runtime under `/etc/secrets/`. When accessing secret files in Docker services, you might encounter permission errors like the following: ``` cp: cannot open '/etc/secrets/myfile' for reading: Permission denied ``` To resolve this, make sure your application user is in group `1000`. You can set this in your Dockerfile: ```dockerfile # Alpine-based images do not have usermod by default and must install it: # RUN apk add shadow # Add your application user to group 1000 RUN usermod -a -G 1000 your-app-user ``` ## Environment Variables in Docker Builds Docker doesn't provide a way to pass environment variables to a build. It does, however, provide build args. Render injects your service's environment variables as build args with the same keys and values. You can use build args in your Dockerfile with the [`ARG` instruction](https://docs.docker.com/engine/reference/builder/#arg). > We recommend against using `ARG` instructions for secrets. Consider using [secret files](#secret-files-in-docker-builds) instead for build-time secrets.
# Native Runtimes Render services provide *native runtimes* that enable you to build and deploy your application using common language environments. Render's native runtimes include: - Automated builds and deploys for supported languages in both public and private Git repositories - [Infrastructure as Code](infrastructure-as-code) support with [`render.yaml`](blueprint-spec) - Regular updates to native runtimes to improve functionality, security, and performance All native runtimes come with standard Render features like: - [Private networking](private-services), load balancing, and service discovery - [Persistent block storage](disks) - Automatic [Brotli](https://en.wikipedia.org/wiki/Brotli) and [gzip](https://en.wikipedia.org/wiki/Gzip) compression for faster responses - Easy HTTP [health checks](deploys#health-checks) and [zero-downtime deploys](deploys#zero-downtime-deploys). - Automatic [pull request previews](service-previews) - Native HTTP/2 support - [DDoS protection](ddos-protection) - Automatic HTTP → HTTPS redirects ## Available Runtimes Render provides native runtimes for Node.js / Bun, Python, Ruby, Go, Rust, and Elixir. For details, see [Supported Languages](language-support). ### Changing a service's runtime If you've recently created a service with an incorrect runtime, the fastest fix is usually to create a _new_ service with the correct runtime. You can also change an existing service's runtime in any of the following ways: - Make an HTTP call to the Render API's [Update service](https://api-docs.render.com/reference/update-service) endpoint. - Specify a new `runtime` via the `serviceDetails` parameter you provide in your request. - If you're managing your service with [Render Blueprints](infrastructure-as-code), update the service's `runtime` field in your `render.yaml` file, then sync your Blueprint. ## Tools and utilities The tools and utilities listed below are available for native builds and deploys. 
If your build requires a tool that _isn't_ listed below, you can [deploy with Docker](docker) instead of building natively. ### Builds - bun - curl - ffmpeg - g++ - gcc - gettext - git - gnupg2 - jq - libvips-dev - libvips-tools - make - nano - node - npm - pandoc - pigz - pnpm - postgresql-client - princexml - python3-dev - python3-pip - python3-setuptools - rsync - sqlite3 - swig - typescript - unzip - vim - webpack - wget - yarn - zip ### Deploys - bun - curl - ffmpeg - g++ - gcc - gettext - ghostscript - git - gnupg2 - imagemagick - jq - libvips-dev - libvips-tools - make - nano - node - npm - pandoc - pigz - pnpm - postgresql-client - postgresql-client-12 - postgresql-client-13 - postgresql-client-14 - princexml - python3-dev - python3-pip - python3-setuptools - rsync - sqlite3 - swig - typescript - unzip - vim - webpack - wget - yarn - zip # Environment Variables and Secrets You can (and should!) use *environment variables* to configure your Render services: [img] Environment variables enable you to customize a service's runtime behavior for different environments (such as development, staging, and production). They also protect you from committing secret credentials (such as API keys or database connection strings) to your application source. In addition to setting environment variables, you can: - Upload plaintext [secret files](#secret-files) to Render that are available from your service's file system at runtime. - Create [environment _groups_](#environment-groups) to share a collection of environment variables and secret files across multiple Render services. ## Setting environment variables > Render sets default values for certain environment variables. [See the list.](environment-variables) ### In the Render Dashboard 1. In the [Render Dashboard](https://dashboard.render.com/), select the service you want to add an environment variable to. 2. Click *Environment* in the left pane. 3. Under *Environment Variables*, click *+ Add Environment Variable*. 
- You can also click *Add from .env* to [add environment variables in bulk](#adding-in-bulk-from-a-env-file). 4. Provide a *Key* and *Value* for each new environment variable. 5. Save your changes. You can select one of three options from the dropdown: [img] - *Save, rebuild, and deploy:* Render triggers a new build for your service and deploys it with the new environment variables. - *Save and deploy:* Render redeploys your service's _existing_ build with the new environment variables. - *Save only:* Render saves the new environment variables _without_ triggering a deploy. Your service will not use the new variables until its next deploy. That's it! Render saves your environment variables and then kicks off a deploy (unless you selected *Save only*). #### Adding in bulk from a `.env` file If you have a local `.env` file, you can bulk-add its environment variables to your service by clicking *Add from .env* on your service's *Environment* page. Your file must use valid `.env` syntax. Here are some valid variable declarations: ```bash # Value without quotes (doesn't support whitespace) KEY_1=value_of_KEY_1 # Value with quotes (supports whitespace) KEY_2="value of KEY_2" # Multi-line value KEY_3="-----BEGIN----- value of KEY_3 -----END-----" ``` ### Via Blueprints If you're using Render [Blueprints](infrastructure-as-code) to represent your infrastructure as code, you can declare environment variables for a service directly in your `render.yaml` file. > **Don't commit the values of secret credentials to your `render.yaml` file!** Instead, you can declare [placeholder environment variables](blueprint-spec#prompting-for-secret-values) for secret values that you then populate from the Render Dashboard. 
Here are common patterns for declaring environment variables in a Blueprint: ```yaml envVars: - key: NODE_ENV value: staging # Set NODE_ENV to the hardcoded string 'staging' - key: APP_SECRET generateValue: true # Render generates a random base64-encoded, 256-bit secret for APP_SECRET - key: DB_URL fromDatabase: # Set DB_URL to the connection string for the db 'mydb' name: mydb property: connectionString - key: MINIO_ROOT_PASSWORD fromService: # Copy the MINIO_ROOT_PASSWORD from the private service 'minio' type: pserv name: minio envVarKey: MINIO_ROOT_PASSWORD - key: STRIPE_API_KEY sync: false # For security, provide STRIPE_API_KEY in the Render Dashboard - fromGroup: my-env-group # Link the 'my-env-group' environment group to this service ``` For more details and examples, see the [Blueprint Specification](blueprint-spec#environment-variables). ## Secret files You can upload **secret files** to Render to make those files available to your service at runtime. These are plaintext files that usually contain one or more secret credentials, such as a private key. > The combined size of all secret files uploaded to any given service or [environment group](#environment-groups) cannot exceed 1 MB. 1. In the [Render Dashboard](https://dashboard.render.com/), select the service you want to add a secret file to. 2. Click **Environment** in the left pane. 3. Under **Secret Files**, click **+ Add Secret File**. - You can click the button multiple times to add multiple files. 4. Provide a **Filename** for the secret file. - At runtime, the secret file is available at `/etc/secrets/`. - For non-Docker services, the file is _also_ available in your service's root directory. - To access the secret file from a Docker-based service, see [Accessing secret files at runtime.](docker-secrets#accessing-secret-files-at-runtime) 5. Click the **Contents** field to paste in the file's contents. 6. Click **Save Changes**. That's it! 
Render kicks off a new deploy of your service to make the secret file available. ## Environment groups **Environment groups** are collections of environment variables and/or [secret files](#secret-files) that you can link to any number of different services. They're a helpful way to distribute configuration across a [multi-service architecture](multi-service-architecture) using a single source of truth: [diagram] ### Creating an environment group 1. In the [Render Dashboard][dboard], click **Environment Groups** in the left pane. 2. Click **+ New Environment Group**. The following form appears: [img] 3. Provide a helpful **Group Name**. 4. Provide the keys and values for any environment variables you want to add to the group. 5. Upload any [secret files](#secret-files) you want to add to the group. 6. Click **Create Environment Group**. The newly created group appears in the list on your **Env Groups** page. ### Linking a group to a service After you [create an environment group](#creating-an-environment-group), you can link it to any number of different services. You can link multiple environment groups to a single service. > **Important precedence details:** > > - **Avoid variable collisions when linking multiple environment groups.** Render _does not guarantee_ its precedence behavior when multiple linked environment groups define the same environment variable. > - Currently, Render uses the value from the _most recently created_ environment group. **This behavior might change in the future without notice.** > - If a service defines an environment variable in its individual settings, that value always takes precedence over any linked environment groups that also define the variable. Render _does_ guarantee this behavior. 1. In the [Render Dashboard](https://dashboard.render.com/), select the service you want to link an environment group to. 2. Click **Environment** in the left pane. 3. 
Under **Linked Environment Groups**, select a group from the dropdown and click **Link**. That's it! Render kicks off a new deploy of your service to incorporate the values from the linked environment group. ### Modifying a group You can modify an existing environment group from your **Env Groups** page in the [Render Dashboard][dboard]. You can add new values, replace existing values, and so on. If you make changes to an environment group (including deleting it), Render kicks off a new deploy for every linked service that has autodeploys enabled. ### Scoping a group to a single environment You can create [projects](projects) to organize your services by their application and environment (such as staging or production). You can then scope an environment group to only the services in a single project environment. If you do, you can't link the group to any service _outside_ that environment. This helps ensure that your services use exactly the configuration you expect. > If an environment group _doesn't_ belong to a particular project environment, you can link it to _any_ service in your team—including services that _do_ belong to an environment. 1. From your environment group's details page, click **Manage > Move group**: [img] (This option doesn't appear if you haven't created any projects.) 2. In the dialog that appears, select a project and environment to move to. 3. Click **Move env group**. After you move a group to a particular environment, it appears on the associated project's page: [img] Note that you still need to link the group to any applicable services in the environment. ## Reading environment variables from code Each programming language provides its own mechanism for reading the value of an environment variable. Below are basic examples of reading the environment variable `DATABASE_URL`. 
> **Environment variable values are always strings.** > > In your application logic, perform any necessary conversions for variable values that represent other data types, such as `"false"` or `"10000"`. #### JavaScript ```js const databaseUrl = process.env.DATABASE_URL ``` #### Python ```python import os database_url = os.environ.get('DATABASE_URL') ``` #### Ruby ```ruby database_url = ENV['DATABASE_URL'] ``` #### Go ```go package main import "os" func main() { databaseURL := os.Getenv("DATABASE_URL") } ``` #### Elixir ```elixir database_url = System.get_env("DATABASE_URL") ``` ## Setting environment variables locally ### Using `export` To set environment variables in your local environment, you can use the `export` command in your terminal: ```shell export KEY=value ``` ### Using a `.env` file It can be useful to create a local `.env` file at the root of your local project that lists the names and values of environment variables, like so: ```bash KEY1=value1 KEY2=value2 ``` Many languages have a library for reading a `.env` file, such as [dotenv](https://www.npmjs.com/package/dotenv) for Node.js and [python-dotenv](https://github.com/theskumar/python-dotenv) for Python. If you use a `.env` file, you can [bulk-add its environment variables](#adding-in-bulk-from-a-env-file) to your Render service. > **Do not commit your `.env` file to source control!** This file often contains secret credentials. To avoid accidentally committing it, add `.env` to your project's `.gitignore` file. # Default Environment Variables Render automatically sets the values of certain environment variables for your service. Unless otherwise noted, these environment variables are available at both build time and runtime. > *Environment variable values are always strings.* > > In your application logic, perform any necessary conversions for variable values that represent other data types, such as `"false"` or `"10000"`. 
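To make those conversions concrete, here's an illustrative sketch of small helper functions. (`env_bool` and `env_int` are just example names, not part of any Render SDK.)

```python
import os

def env_bool(name: str, default: bool = False) -> bool:
    """Interpret a string-valued environment variable as a boolean."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "yes")

def env_int(name: str, default: int = 0) -> int:
    """Interpret a string-valued environment variable as an integer."""
    value = os.environ.get(name)
    return int(value) if value is not None else default

# Render sets IS_PULL_REQUEST to the *string* "true" or "false";
# simulate that here for illustration:
os.environ["IS_PULL_REQUEST"] = "false"
print(env_bool("IS_PULL_REQUEST"))  # prints: False
```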
## By runtime ### All runtimes ###### `IS_PULL_REQUEST` This value is `true` for [pull request previews](service-previews) and `false` otherwise. Note that these are the _string_ values `"true"` and `"false"`. Convert to booleans as needed. ###### `RENDER` This value is always `true`. Your code can check this value to detect whether it's running on Render. ###### `RENDER_CPU_COUNT` The number of CPUs available for this service, based on its [instance type](pricing#services). For example, this value is `0.5` for the Starter instance type and `2` for the Pro instance type. Note that these are the _string_ values `"0.5"` and `"2"`. Convert to numbers as needed. ###### `RENDER_DISCOVERY_SERVICE` The Render DNS name used to discover all running instances of a [scaled service](scaling). Has the format `$RENDER_SERVICE_NAME-discovery`. ###### `RENDER_EXTERNAL_HOSTNAME` For a web service or static site, this is the service's `onrender.com` hostname (such as `myapp.onrender.com`). For other service types, this value is empty. ###### `RENDER_EXTERNAL_URL` For a web service or static site, this is the service's full `onrender.com` URL (such as `https://myapp.onrender.com`). For other service types, this value is empty. ###### `RENDER_GIT_BRANCH` The Git branch for a service or deploy. ###### `RENDER_GIT_COMMIT` The commit SHA for a service or deploy. ###### `RENDER_GIT_REPO_SLUG` Has the format `$username/$reponame`. ###### `RENDER_INSTANCE_ID` The unique identifier of the current service instance. Useful for [scaled services](scaling) with multiple instances. ###### `RENDER_SERVICE_ID` The service's unique identifier. Used in the [Render API](api). ###### `RENDER_SERVICE_NAME` A unique, human-readable identifier for a service. ###### `RENDER_SERVICE_TYPE` The current service's [type](service-types). One of `web`, `pserv`, `cron`, `worker`, `static`. 
###### `RENDER_WEB_CONCURRENCY` For a web service or private service, this is the recommended number of concurrent web processes for handling requests. This is based on the number of CPUs available on the service's [instance type](pricing#services). For example, this value is `1` for the Starter instance type and `2` for the Pro instance type. Note that these are the _string_ values `"1"` and `"2"`. Convert to numbers as needed. This is only available at runtime. At build time or for other service types, this value is empty. ###### `WEB_CONCURRENCY` For a web service or private service created after December 8, 2025, this defaults to the recommended number of concurrent web processes for handling requests. This is based on the number of CPUs available on the service's [instance type](pricing#services). For example, this value is `1` for the Starter instance type and `2` for the Pro instance type. Note that these are the _string_ values `"1"` and `"2"`. Convert to numbers as needed. This is only available at runtime. At build time, for other service types, or for web and private services created before the cutoff date, this value is empty. > *Other environment variables starting with `RENDER_` might be present in your build and runtime environments.* > > However, variables not listed above are strictly for internal use and might change without warning. ### Docker Render does not provide additional environment variables on top of what's listed under [All runtimes](#all-runtimes).
### Elixir ###### `MIX_ENV` `prod` ###### `RELEASE_DISTRIBUTION` `name` ### Go ###### `GO111MODULE` `on` ###### `GOPATH` `/opt/render/project/go` ### Node.js ###### `NODE_ENV` `production` (runtime only) ###### `NODE_MODULES_CACHE` `true` ### Python 3 ###### `CI` `true` (build time only) ###### `FORWARDED_ALLOW_IPS` `*` ###### `GUNICORN_CMD_ARGS` `--preload --access-logfile - --bind=0.0.0.0:10000` ###### `PIPENV_YES` `true` ###### `VENV_ROOT` `/opt/render/project/src/.venv` ### Ruby ###### `BUNDLE_APP_CONFIG` `/opt/render/project/.gems` ###### `BUNDLE_BIN` `/opt/render/project/.gems/bin` ###### `BUNDLE_DEPLOYMENT` `true` ###### `BUNDLE_PATH` `/opt/render/project/.gems` ###### `GEM_PATH` `/opt/render/project/.gems` ###### `MALLOC_ARENA_MAX` `2` ###### `PASSENGER_ENGINE` `builtin` ###### `PASSENGER_ENVIRONMENT` `production` ###### `PASSENGER_PORT` `10000` ###### `PIDFILE` `/tmp/puma-server.pid` ###### `RAILS_ENV` `production` ###### `RAILS_SERVE_STATIC_FILES` `true` ###### `RAILS_LOG_TO_STDOUT` `true` ### Rust ###### `CARGO_HOME` `/opt/render/project/.cargo` ###### `ROCKET_ENV` `prod` ###### `ROCKET_PORT` `10000` (runtime only) ###### `RUSTUP_HOME` `/opt/render/project/.rustup` ## Optional environment variables You can set these environment variables to modify the default behavior for your services. ### All runtimes ###### `PORT` For [web services](web-services), specify the port that your HTTP server binds to. The default port is `10000`. ### Elixir ###### `ELIXIR_VERSION` See [Setting Your Elixir and Erlang Versions](elixir-erlang-versions). ###### `ERLANG_VERSION` See [Setting Your Elixir and Erlang Versions](elixir-erlang-versions). ### Node.js ###### `SKIP_INSTALL_DEPS` Set this to `true` to skip running `yarn`/`npm install` during build. ###### `NODE_VERSION` See [Setting Your Node.js Version](node-version). ###### `BUN_VERSION` See [Setting Your Bun Version](bun-version). ### Python 3 ###### `PYTHON_VERSION` See [Setting Your Python Version](python-version). 
###### `POETRY_VERSION` See [Setting Your Poetry Version](poetry-version). ###### `UV_VERSION` See [Setting Your uv Version](uv-version). ### Rust ###### `RUSTUP_TOOLCHAIN` See [Specifying a Rust Toolchain](rust-toolchain). ## How to set environment variables See [Environment Variables and Secrets](configure-environment-variables). # Render Workflows > *Render Workflows are in limited early access.* > > During the early access period, the Workflows API and SDK might introduce breaking changes. *Render Workflows* provide managed execution of distributed tasks with rapid spin-up and automatic retries. Create workflows to manage ETL pipelines, AI agents, or any other job that benefits from widely distributed background execution. [img] A workflow defines a collection of *tasks* (such as `process_docs` and `process_doc` above) that you can run from your applications. Each task runs in its own compute instance and can spin up _other_ tasks as *subtasks* to efficiently distribute work across hundreds or even thousands of instances. *Render automatically handles queuing, provisioning, and orchestration of tasks.* Task instances run alongside your other Render [service types](service-types), enabling fast and safe communication over your [private network](private-network). Task instances usually spin up in under one second, providing a more performant, comprehensive evolution of the [background worker](background-workers) model. ## Core capabilities | Feature | Description | |--------|--------| | *Automatic queuing and orchestration* | Render coordinates every phase of the task lifecycle automatically, from queuing to spin-up to deprovisioning. | | *Long-running execution* | Each task instance can run for up to 2 hours. A future update will further extend this limit. | | *Configurable retry logic* | Define [retry behavior](workflows-defining#retry-logic) for each task in the event of failure, with exponential backoff.
| | *Outbound networking* | Task instances can initiate network connections over both the public internet and your [private network](private-network). Task instances cannot receive _incoming_ network connections. | | *Execution observability* | Track the progress and status of active and completed task runs in the Render Dashboard. | | *Unified SDK* | Install a single lightweight SDK both to register your tasks and to run them from your applications. > *The Workflows SDK is currently available only for Python.* SDKs for other languages are coming soon. | ### Early access limitations We'll address these limitations in future releases following early access: - Workflows currently only support Python for defining tasks. - SDKs for other languages are coming soon. - It is not yet possible to customize a task's associated compute specs. Currently, every task instance has 1 CPU and 2 GB of RAM. - Workflows do not provide built-in support for [running tasks](workflows-running) on a schedule. - To schedule tasks, you can create a [cron job](cronjobs) that runs your tasks on the desired schedule. - If a workflow belongs to a [network-isolated environment](projects#blocking-cross-environment-traffic), its task instances _cannot_ communicate with other services in that environment over its private network. - Workflows do not yet support running tasks on [HIPAA-compliant](hipaa-compliance) hosts. - To prevent accidental PHI exposure, it is not currently possible to create new workflows in a HIPAA-enabled workspace. ## How it works 1. 
You define workflow tasks as Python functions using the Workflows SDK (support for other languages is coming soon): ```python:main.py from render_sdk.workflows import task, start import asyncio # A basic task @task def calculate_square(a: int) -> int: return a * a # A task that runs two subtasks in parallel @task async def sum_squares(a: int, b: int) -> int: result1, result2 = await asyncio.gather( calculate_square(a), calculate_square(b) ) return result1 + result2 if __name__ == "__main__": start() # SDK entry point ``` 2. In the [Render Dashboard][dboard], you create a new workflow service and link your Python project repo. 3. Render pulls your repo and performs a build, registering your tasks automatically. 4. You can now run your registered tasks from application code using either the Workflows SDK or the Render API. ## Get started After your workspace receives early access, you're ready to [create your first workflow!](workflows-tutorial) ## FAQ ###### How do I receive access to Render Workflows? Request early access for your workspace at [render.com/workflows](workflows). ###### How do I get started with Render Workflows? After your workspace receives early access, get started with [Your First Workflow](workflows-tutorial). ###### Can I define workflow tasks in a language besides Python? *Not yet.* SDKs for languages besides Python are coming soon. ###### Can I run tasks from a language besides Python? *Yes.* You can run tasks by calling the Render API directly instead of using the Python SDK. For details, see [Run Workflow Tasks](workflows-running). ###### Can task instances receive incoming network connections? *No.* Similar to [background workers](background-workers), your task instances must initiate any required network connections themselves. # Your First Workflow > *[Render Workflows](workflows) are in limited early access.* > > During the early access period, the Workflows API and SDK might introduce breaking changes. Welcome to Render Workflows! 
After early access is enabled for your workspace, follow these steps to register your first task and run it. ## 1. Clone the Python template > *Workflows currently only support Python for defining tasks.* > > SDKs for other languages are coming soon. As part of creating a workflow, you'll link a GitHub/GitLab/Bitbucket repo that contains your task definitions. To get started quickly, copy our basic [*Python template*](https://github.com/render-examples/workflows-template-python) on GitHub. On the template page, click *Use this template > Create a new repository* to create your own repo with the template's contents. ### The anatomy of a workflow Let's look at an excerpt from the template's `main.py` file: ```python:main.py from render_sdk.workflows import task, start # Minimal task definition @task def calculate_square(a: int) -> int: return a * a if __name__ == "__main__": start() # Workflow entry point ``` This excerpt shows the bare minimum syntax for defining a workflow: - You define a task by applying the `@task` decorator to any function. - You call the `start` function on startup to initiate task registration and execution. Both `@task` and `start` are imported from the [Workflows SDK for Python](workflows-sdk-python), which is the template's only dependency. ## 2. Create a workflow service 1. In the [Render Dashboard][dboard], click **New > Workflow**: [img] The workflow creation form appears. > **Don't see the Workflow service type?** > > Make sure you're in a workspace that has received early access to Render Workflows. Request early access at [render.com/workflows](workflows). 2. Link the GitHub/GitLab/Bitbucket repo with your workflow's task definitions. 3. Complete the remainder of the creation form. See guidance for important fields: | Field | Description | |--------|--------| | **Language** | Currently, this is always **Python 3**. Support for other languages is coming soon. 
| | **Region** | Your workflow's tasks will run on instances in the specified region. This determines which of your _other_ Render services they can reach over your [private network](private-network). | | **Build Command** | If you're using the Python example template, this is the following: `pip install -r requirements.txt` Otherwise, provide the command that Render should use to build your workflow. | | **Start Command** | If you're using the Python example template, this is the following: `python main.py` Otherwise, provide the command that Render should use to start your workflow. | 4. Click **Deploy Workflow**. Render kicks off your workflow's first build, which includes registering your tasks. That's it! After the build completes, your tasks are officially registered. You can view them from your workflow's **Tasks** page in the [Render Dashboard][dboard]: [img] ## 3. Execute a task Now that we have a registered task, let's run it! The quickest way to trigger our first run is from the Render Dashboard: 1. From your workflow's **Tasks** page, click a task to open its **Runs** page. 2. Click **Run Task** in the top-right corner of the page: [img] A dialog appears for providing the task's input arguments: [img] 3. Provide the task's input arguments as a JSON array (e.g., `[5]` for a task that takes a single integer argument, or `[]` for a task that takes zero arguments). 4. Click *Start task*. Your new task run appears at the top of the *Runs* table. ## Next steps Congratulations! You've registered and run your first workflow task. Now it's time to start designing your own tasks and running them from application code: - [Define advanced tasks](workflows-defining) with retries, subtasks, and more. - [Run your registered tasks](workflows-running) from application code. - [Test task runs locally](workflows-local-development) for faster development. 
# Defining Workflow Tasks > *[Render Workflows](workflows) are in limited early access.* > > During the early access period, the Workflows API and Python SDK might introduce breaking changes. After you [create your first workflow](workflows-tutorial), you can start defining your own tasks. This article describes supported syntax and configuration options. ## Basic example > *The Workflows SDK is currently available only for Python.* > > SDKs for other languages are coming soon. It is not currently possible to define tasks in languages besides Python. **Python** Define a task in your workflow service by applying the `@task` decorator to any function, like so: ```python:main.py from render_sdk.workflows import task, start @task #highlight-line def calculate_square(a: int) -> int: return a * a if __name__ == "__main__": start() ``` This defines a basic task named `calculate_square` that takes a single integer argument and returns its square. For all supported `@task` options, see the [Python SDK reference](workflows-sdk-python#the-task-decorator). ## Organizing tasks You can define your workflow's tasks across any number of files in your project repo: ```python:main.py from render_sdk.workflows import ( task, start, ) import math_tasks # Import all tasks @task def capitalize(s: str) -> str: return s.upper() if __name__ == "__main__": start() # SDK entry point ``` ```python:math_tasks.py from render_sdk.workflows import task @task def calculate_square(a: int) -> int: return a * a @task def add(a: int, b: int) -> int: return a + b ``` In the above example, task definitions are distributed across two files: `main.py` and `math_tasks.py`. For tasks to register successfully, your workflow's entry point (commonly `main.py`) must import your other files that contain task definitions. 
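To see why these imports matter, here's a toy sketch of decorator-based registration (illustrative only; this is _not_ the SDK's actual implementation). A decorator runs at import time, so a module's tasks can only register if the module is imported:

```python
# Toy illustration only -- NOT the real render_sdk internals.
REGISTRY = {}

def task(fn):
    # Runs when the defining module is imported, recording the function
    REGISTRY[fn.__name__] = fn
    return fn

@task
def calculate_square(a: int) -> int:
    return a * a

print(sorted(REGISTRY))  # ['calculate_square']
```

A file that is never imported never runs its decorators, so any tasks it defines would be invisible during registration.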
## Task arguments **A task's function can define any number of arguments.** This example task takes three arguments of different types: ```python @task def my_task(arg1: int, arg2: str, arg3: bool) -> int: # ... ``` **Task arguments are positional.** Whenever you [run a task](workflows-running), you provide its arguments in a JSON array in the same order as they appear in the task's function signature: ```python started_run = await client.workflows.run_task( task_identifier="my-workflow/my-task", input_data=[1, "hello", True] ) ``` **Argument and return types must be JSON-serializable.** Your applications provide task arguments in a JSON array via the Render API, and a task's result is also returned as JSON. **All task arguments are required.** If you attempt to run a task with missing arguments (or _too many_ arguments), the task will fail. Argument values _can_ be null, as long as your task logic supports this. ## Retry logic Your tasks can automatically **retry** if a run fails. A task run is considered to have failed if its function raises an exception instead of returning a value. Retries are useful for tasks that might be affected by temporary failures, such as network errors or timeouts. ### Default retry behavior By default, tasks use the following retry logic: - Retry up to 3 times (i.e., 4 total attempts) - Wait 1 second before attempting the first retry - Double the wait time after each retry (i.e., one second, two seconds, four seconds) ### Customizing retries You can customize retry behavior on a per-task basis: **Python** Provide retry settings to the `@task` decorator with the following syntax: ```python{1,5-11} from render_sdk.workflows import task, Options, Retry import random @task( options=Options( retry=Retry( max_retries=3, # Retry up to 3 times (i.e., 4 total attempts) wait_duration_ms=1000, # Set a base retry delay of 1 second factor=1.5 # Increase delay by 50% after each retry (exp. 
backoff) ) ) ) def flip_coin() -> str: if random.random() < 0.5: raise Exception("Flipped tails! Retrying.") return "Flipped heads!" ``` This contrived example defines a task named `flip_coin` that raises an exception when it "flips tails", causing the run to fail and retry according to its settings. ## Running subtasks A task can run other tasks defined in the same workflow. These **subtasks** each run in their own instance, just like their parent. > **When should I run a subtask?** > > Subtasks are most helpful when different parts of a larger job benefit from long-running, independent compute. > > For simple jobs (such as the very basic example below), it's more efficient to perform the entirety of your logic in a single task. **Python** The simple `sum_squares` task below runs two `calculate_square` subtasks: ```python:math_tasks.py from render_sdk.workflows import task import asyncio # A task that runs two subtasks @task async def sum_squares(a: int, b: int) -> int: # Must be async to await subtasks result1, result2 = await asyncio.gather( calculate_square(a), calculate_square(b) ) return result1 + result2 @task def calculate_square(a: int) -> int: return a * a ``` **When running subtasks:** - In most cases, the parent task should be defined as `async`. - Otherwise, it can't `await` the results of its subtasks. - You run a subtask by calling the corresponding function (e.g., `calculate_square` above). - _However_, this call doesn't return the function's defined return value! - Instead, this kicks off a task run and returns a special `TaskInstance` object. - As shown, you can `await` this object to obtain the subtask's _actual_ return value. - Your task _can_ call functions that are _not_ marked as tasks. These functions run and return as normal (they do not spin up their own task instances). > **A task _can_ run another task defined in a _different_ workflow.
However:** > > - This requires instead using the Workflows SDK or Render API, as described in [Running Workflow Tasks](workflows-running). > - This is not tracked as a task/subtask relationship when visualizing task execution in the [Render Dashboard][dboard]. ### Parallelizing subtasks When running subtasks, it's usually helpful to run multiple of them in parallel, such as to chunk a large workload into smaller independent pieces. Common examples include processing batches of images or analyzing different sections of a large document. **Python** Use `asyncio.gather` to run multiple subtasks in parallel. In the example below, the `process_photo_upload` task runs a separate `process_image` subtask for each element in its `image_urls` argument: ```python from render_sdk.workflows import task import asyncio @task async def process_photo_upload(image_urls: list[str]) -> dict: # Process all images in parallel by running a subtask for each results = await asyncio.gather( #highlight-line *[process_image(url) for url in image_urls] #highlight-line ) #highlight-line num_successful = sum(1 for r in results if r["success"]) num_failed = len(results) - num_successful return { "total": len(image_urls), "processed": num_successful, "failed": num_failed, "results": results } @task def process_image(image_url: str) -> dict: # Image processing logic goes here return { "url": image_url, "thumbnail_url": f"{image_url}_thumb.jpg", "success": True } ``` **If you don't use `asyncio.gather` or a similar function, subtasks run serially.** For example: ```python{3-5} @task async def sum_squares_slower(a: int, b: int) -> int: # ⚠️ Not parallel! result1 = await calculate_square(a) result2 = await calculate_square(b) # Runs after first subtask completes return result1 + result2 @task def calculate_square(a: int) -> int: return a * a ``` Serial execution _is_ helpful when running a chain of subtasks that depend on each other. 
However, it dramatically slows execution for subtasks that are completely independent. Parallelize wherever your use case allows. # Running Workflow Tasks > *[Render Workflows](workflows) are in limited early access.* > > During the early access period, the Workflows API and Python SDK might introduce breaking changes. After you [create a workflow](workflows-tutorial) and register tasks, you can start triggering task runs from your applications (such as other Render services). You can also [manually trigger runs](#running-manually) in the Render Dashboard and CLI to help with testing and debugging. ## First: Create an API key *Triggering task runs from your code requires a Render API key.* Create an API key with [these steps](api#1-create-an-api-key), then return here. ## Running with the Workflows SDK > *The Workflows SDK is currently available only for Python.* > > SDKs for other languages are coming soon. To execute tasks from other languages, [use the Render API](#running-with-the-render-api). Follow these steps to execute workflow tasks from application code using the Workflows SDK. ### 1. Install the SDK **Python** ```shell pip install render_sdk ``` Make sure to add `render_sdk` to your application's `requirements.txt` file (or equivalent). ### 2. Set your API key In your application's environment, set the `RENDER_API_KEY` environment variable to your [API key](#first-create-an-api-key): ```bash export RENDER_API_KEY=rnd_abc123… ``` The SDK client automatically detects and uses the value of this environment variable. Alternatively, you can provide your API key explicitly when [initializing the client](workflows-sdk-python#client). ### 3. Initialize the client and trigger a run **Python** The following code demonstrates initializing the SDK client, triggering a task run, and waiting for the run to complete. See below for more details. 
```python:basic_task_runner.py from render_sdk.client import Client import asyncio async def run_task(): # Initialize the client client = Client() # Kick off a task run started_run = await client.workflows.run_task( task_identifier="my-workflow/calculate-square", input_data=[2] ) print(f"Task run started: {started_run.id}") print(f"Initial status: {started_run.status}") # Wait for run to complete finished_run = await started_run print(f"Task run completed: {finished_run.id}") print(f"Final status: {finished_run.status}") if __name__ == "__main__": asyncio.run(run_task()) ``` You trigger a task run by calling the client's [`workflows.run_task`](workflows-sdk-python#workflows-run-task) method. This method takes the following arguments: | Argument | Description | |--------|--------| | `task_identifier` | The **slug** indicating the task to run, available from your task's page in the [Render Dashboard][dboard]: [img] Every task slug has the following format: `{workflow-slug}/{task-name}` For example: `my-workflow/calculate-square` | | `input_data` | A list containing the task's input arguments. Each element maps to the task's corresponding positional argument. The `calculate-square` task in the example above takes a single integer argument. For tasks that take zero arguments, provide an empty list, `[]`. | The `workflows.run_task` method returns an [`AwaitableTaskRun`](workflows-sdk-python#the-awaitabletaskrun-class) object as soon as the run is created. This object provides the run's `id` and initial `status`, which are both available immediately. You can `await` this object to wait for the run to complete, at which point all other properties are available. For full options and details, see the [Python SDK reference](workflows-sdk-python#workflows-run-task). ## Running with the Render API The [Render API](api) provides an endpoint for triggering task runs, along with a variety of endpoints for retrieving workflow and task run details. 
The Workflows SDK uses the Render API behind the scenes, and you can also use it directly from your own code. Start a task run by sending a POST request to the [Run a Task](https://api-docs.render.com/reference/createtask/) endpoint. The JSON body for this request includes two properties: ```json { "task": "my-workflow/calculate-square", "input": [2] } ``` | Property | Description | |--------|--------| | `task` | **Required.** An identifier specifying the task to run. You can provide either of two identifiers, both of which are available from your task's page in the [Render Dashboard][dboard]: [img] - The task's **slug** - This has the format `{workflow-slug}/{task-name}` (for example, `my-workflow/calculate-square`) - The task's **ID** - This has the format `tsk-abc123...` | | `input` | **Required.** A list containing values for the task's [input arguments.](workflows-defining#task-arguments) For a task that takes zero arguments, provide an empty list, `[]`. | ## Running manually You can manually trigger task runs directly from the Render Dashboard and CLI. This is handy for testing and debugging new tasks. **Dashboard** #### Running tasks manually in the Render Dashboard 1. From your workflow's **Tasks** page in the [Render Dashboard][dboard], click a task to open its **Runs** page. 2. Click **Run Task** in the top-right corner of the page: [img] A dialog appears for providing the task's input arguments: [img] 3. Provide the task's input arguments as a JSON array. Each array element maps to the task's corresponding positional argument. For example, you can provide `[5]` for a task that takes a single integer argument, or `[]` for a task that takes zero arguments. You can click **Format** and **Validate** to cleanly structure your input and confirm that it's valid JSON. 4. Click **Start task**. Your new task run appears at the top of the **Runs** table. **CLI** #### Running manually with the Render CLI 1. 
Make sure your development machine has version 2.4.2 or later of the Render CLI: ```shell{outputLines:2} render --version render version 2.4.2 ``` If it doesn't, [install the latest version](cli#setup). 2. Run the following command: ```shell render ea tasks list ``` The CLI opens an interactive menu of all workflow tasks in your workspace: [img] 3. Select a task and press **Enter**, then select the `run` command. The CLI prompts you to provide the task's input arguments as a JSON array: [img] 4. Provide your desired arguments (or `[]` for a task that takes zero arguments) and press **Enter**. The CLI kicks off your task with a request to the Render API and begins tailing its logs. You can remain in this view to view live logs from your task run. 5. Press **Esc** to navigate back up to the list of commands for your task. This time select the `runs` command. The CLI opens an interactive menu of the task's runs: [img] 6. Select a run and press **Enter**, then select the `results` command. The CLI opens a view of the run's results: [img] # Local Dev with Render Workflows > *[Render Workflows](workflows) are in limited early access.* > > During the early access period, the Workflows API and Python SDK might introduce breaking changes. You can run workflow tasks on your local machine to iterate on them quickly. The Render CLI supports spinning up a *local task server* that simulates the entire task execution lifecycle. You can trigger task runs from your application code, or from the CLI itself. As you iterate on your task definitions, the local task server picks up changes automatically. It also retains in-memory logs and results for each run (these are lost on server shutdown). 
## Prerequisites *Your development machine must have:* - The Render CLI version 2.4.2 or later - [Install the Render CLI](cli#setup) - A workflow project repo that defines and registers tasks - [Create your first workflow](workflows-tutorial) ## Starting the task server From your workflow project repo, run the following command: ```shell render ea tasks dev -- <start command> ``` Replace `<start command>` with the command to start your workflow. This is commonly `python main.py`: ```shell render ea tasks dev -- python main.py ``` **Command not found?** - Make sure you've [installed the Render CLI](cli#setup). - Run `render --version` to confirm you're using version 2.4.2 or later. Your local task server spins up and starts listening on port `8120`. You can specify a different port with the `--port` option: ```shell render ea tasks dev --port 8121 -- python main.py ``` ## Triggering local task runs ### In application code > **This section assumes you have an existing app that runs workflow tasks.** > > If you don't, first get set up with [Running Workflow Tasks](workflows-running). You can configure your locally running apps to point to your local task server when triggering task runs. How you do this depends on whether your app uses the Workflows SDK or the Render API: **Workflows SDK (Python)** If your app uses the Workflows SDK to trigger task runs, set the following environment variable(s) to run tasks against your local task server: ```bash:.env # Always set this: RENDER_USE_LOCAL_DEV=true # Also set this if you're using a non-default URL/port: # RENDER_LOCAL_DEV_URL=http://localhost:8121 ``` **Render API** If your app uses the Render API directly to run tasks, swap out the base URL you use for task-related endpoints with your local task server's URL.
Here's a Node.js example: ```js const TASKS_BASE_URL = process.env.RENDER_TASKS_URL || 'https://api.render.com' ``` In this example, you would set the `RENDER_TASKS_URL` environment variable to your local task server URL (e.g., `http://localhost:8120`) to use it for development. Note that the local task server _only_ simulates task-related endpoints. Other Render API endpoints are not supported. ### In the Render CLI 1. With your [local task server running](#starting-the-task-server), run the following command: ```shell render ea tasks list --local ``` **Don't forget the `--local` flag!** Otherwise, the CLI will list tasks from your deployed workflow services. The CLI opens an interactive menu of your locally registered tasks: [img] 2. Select a task and press **Enter**, then select the `run` command. The CLI prompts you to provide the task's input arguments as a JSON array: [img] If your task takes zero arguments, provide an empty list, `[]`. 3. Provide your desired arguments and press **Enter**. The CLI kicks off your task with a request to your local task server and begins tailing its logs. You can remain in this view to view live logs from your task run. 4. Press **Esc** to navigate back up to the list of commands for your task. This time select the `runs` command. The CLI opens an interactive menu of the task's local runs: [img] 5. Select a run and press **Enter**, then select the `results` command. The CLI opens a view of the run's results: [img] ## Local-only considerations - Logs and results for local task runs are stored in memory by the local task server. - This data is lost when the server shuts down. - This data is retained indefinitely as long as the server is running. This can lead to high memory usage over time. - If you trigger a high volume of local task runs, we recommend periodically restarting your local task server to free up memory. - Identifiers for local tasks and runs are randomly generated UUIDs.
- The identifier for a given task differs each time you run the local task server. - Local identifiers do not correspond to any values in your deployed workflow services. # Workflows SDK for Python > *[Render Workflows](workflows) are in limited early access.* > > During the early access period, the Workflows API and SDK might introduce breaking changes. Render provides a Python SDK that supports both registering workflow tasks and executing those tasks from application code. ## Install ```shell pip install render_sdk ``` Make sure to add `render_sdk` as a dependency to your application's `requirements.txt` file (or equivalent). ## The `@task` decorator You apply the `@task` decorator to a Python function to register it as a workflow task. For details, see [Defining Workflow Tasks](workflows-defining). ### Minimal example ```python from render_sdk.workflows import task @task def calculate_square(a: int) -> int: return a * a ``` ### Example with all arguments ```python from render_sdk.workflows import task, Options, Retry @task( name="calc_square", # Give the task a custom name (defaults to function name) options=Options( retry=Retry( # Define default retry logic for the task max_retries=3, # Retry up to 3 times (i.e., 4 total attempts) wait_duration_ms=1000, # Set a base retry delay of 1 second factor=1.5 # Increase delay by 50% after each retry (exponential backoff) ) ) ) def calculate_square(a: int) -> int: return a * a ``` ### Argument reference **Top-level arguments** | Option | Description | |--------|--------| | `name` | A custom name for the task. This affects the task's **slug**, which you use to reference the task when [running it](workflows-running#3-initialize-the-client-and-trigger-a-run). If omitted, defaults to the name of the decorated function. | | `options` | Contains all other arguments for the task definition. Currently supports a single argument: `retry`.
| **Retry arguments** | `max_retries` | The maximum number of retries to attempt for a given run of the task. The total number of attempts is up to `max_retries + 1` (the initial attempt plus all retries). | | `wait_duration_ms` | The base delay before attempting the first retry, in milliseconds. | | `factor` | The exponential backoff factor. After each retry, the previous delay is multiplied by this factor. For example, a factor of `1.5` increases the delay by 50% after each retry. | ## The `start` function The `start` function serves as the entry point for your workflow during both task registration and task execution. Your workflow definition must call this function as part of startup: ```python:main.py from render_sdk.workflows import task, start @task def calculate_square(a: int) -> int: return a * a if __name__ == "__main__": start() ``` This function takes no arguments. ## The `Client` class The `render_sdk.client.Client` class provides methods for [running registered tasks](workflows-running) from Python applications (such as a Render web service or cron job): ```python from render_sdk.client import Client client = Client() await client.workflows.run_task("my-workflow/calculate-square", [2]) ``` ### Constructor ###### `Client` Initializes a new `Client` instance. All arguments are optional. ```python from render_sdk.client import Client # Basic initialization client = Client() # Initialization with all arguments client = Client( token="rnd_abc123…", # API key base_url="http://localhost:8120" # Local task server URL ) ``` | Argument | Description | |--------|--------| | `token` | The [API key](api#1-create-an-api-key) to use for authentication. If omitted, the client automatically detects and uses the value of the `RENDER_API_KEY` environment variable. | | `base_url` | The base URL to use for task-related requests. Specify only for [local development](workflows-local-development).
If omitted: - By default, the client uses the base URL of the Render API (`https://api.render.com`). - If the `RENDER_LOCAL_DEV_URL` environment variable is set, the client uses the value of this variable. - If the `RENDER_USE_LOCAL_DEV` environment variable is set to `true`, the client uses the local task server's default URL (`http://localhost:8120`). | ### Task methods All methods below are `async`. ###### `workflows.run_task` Runs the registered task with the specified identifier, passing the specified arguments. **On success:** Returns an [`AwaitableTaskRun`](#the-awaitabletaskrun-class) object representing the initial state of the task run. **Raises:** [`ClientError`](#clienterror), [`ServerError`](#servererror), [`TimeoutError`](#timeouterror) ```python # Execute the calculate_square task with an input of 2 started_task_run = await client.workflows.run_task("my-workflow/calculate-square", [2]) task_run_id = started_task_run.id # ID is available immediately task_run_status = started_task_run.status # Initial status is available immediately finished_task_run = await started_task_run # Other properties become available after the task run completes print(finished_task_run.results) # Prints the task run's result (4) ``` | Argument | Description | |--------|--------| | `task_identifier` | **Required.** The **slug** indicating the task to run, available from your task's page in the [Render Dashboard][dboard]: [img] Always has the format `{workflow-slug}/{task-name}` (e.g., `my-workflow/calculate-square`). | | `input_data` | **Required.** A list containing values for the task's [input arguments.](workflows-defining#task-arguments) For a task that takes zero arguments, provide an empty list, `[]`. | ###### `workflows.list_task_runs` Lists task runs that match optional filters specified in the provided `ListTaskRunsParams` object.
**On success:** Returns a list of `TaskRun` objects. **Raises:** [`ClientError`](#clienterror), [`ServerError`](#servererror), [`TimeoutError`](#timeouterror) ```python from render_sdk.client.types import ListTaskRunsParams params = ListTaskRunsParams( limit=10, # Return up to 10 runs cursor="cfQ74cE2sDI=", # Start from this cursor owners=["tea-d3jm7ai4d50c73fale60"] # Limit to these workspaces ) await client.workflows.list_task_runs(params) ``` ###### `workflows.get_task_run` Retrieves the details of the task run with the specified ID. **On success:** Returns a [`TaskRunDetails`](#the-taskrundetails-class) object. **Raises:** [`ClientError`](#clienterror), [`ServerError`](#servererror), [`TimeoutError`](#timeouterror) ```python await client.workflows.get_task_run("trn-abc123") ``` | Argument | Description | |--------|--------| | `task_run_id` | **Required.** The ID of the task run to retrieve. Has the format `trn-abc123...` | ###### `workflows.cancel_task_run` Cancels the task run with the specified ID. This raises a [`ClientError`](#clienterror) if the task run is not found, or if it isn't currently running. **On success:** Returns `None`. **Raises:** [`ClientError`](#clienterror), [`ServerError`](#servererror), [`TimeoutError`](#timeouterror) ```python await client.workflows.cancel_task_run("trn-abc123") ``` | Argument | Description | |--------|--------| | `task_run_id` | **Required.** The ID of the task run to cancel. Has the format `trn-abc123...` | ## The `AwaitableTaskRun` class Represents the initial state of a task run as returned by the [`workflows.run_task`](#workflows-run-task) method. You can `await` this object to wait for the task run to complete. On success, it returns a [`TaskRunDetails`](#the-taskrundetails-class) object: ```python started_task_run = await client.workflows.run_task("calculate_square", [2]) finished_task_run = await started_task_run ``` **If the task run fails,** this `await` raises a [`TaskRunError`](#taskrunerror) exception. 
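The two-phase behavior (some properties available immediately, the rest after an `await`) can be pictured with a simplified stand-in class. This is illustrative only and not the SDK's real implementation, which talks to the Render API behind the scenes:

```python
import asyncio

# Simplified stand-in for AwaitableTaskRun (illustrative only).
# `id` and `status` exist as soon as the object is created;
# awaiting the object yields the completed state.
class ToyAwaitableRun:
    def __init__(self, run_id: str):
        self.id = run_id
        self.status = "pending"

    def __await__(self):
        # Pretend the run finishes after one event-loop tick
        yield from asyncio.sleep(0).__await__()
        self.status = "completed"
        return self

async def main():
    run = ToyAwaitableRun("trn-abc123")
    print(run.status)       # pending (available immediately)
    finished = await run    # waits for the run to complete
    print(finished.status)  # completed

asyncio.run(main())
```

In the real SDK, the `await` can of course raise `TaskRunError` instead of returning, so production code typically wraps it in a `try`/`except`.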
### Properties | Property | Description | |--------|--------| | `id` | The ID of the task run. Has the format `trn-abc123...` | | `status` | The initial status of the task run. This is usually `pending`. | ## The `TaskRunDetails` class Represents the current state of a task run. Obtained in one of the following ways: - `await`ing an [`AwaitableTaskRun`](#the-awaitabletaskrun-class) object returned by [`workflows.run_task`](#workflows-run-task): ```python started_task_run = await client.workflows.run_task("calculate_square", [2]) finished_task_run = await started_task_run ``` - Calling the [`workflows.get_task_run`](#workflows-get-task-run) method: ```python task_run_details = await client.workflows.get_task_run("trn-abc123") ``` ### Properties | Property | Description | |--------|--------| | `id` | The ID of the task run. Has the format `trn-abc123...` | | `task_id` | The ID of the run's associated task. Has the format `tsk-abc123...`. | | `input_` | A list containing the argument values that were passed to the task run. Note the trailing underscore (`_`) in this property name. | | `status` | The current status of the task run. One of the following: - `pending` - `running` - `completed` - `failed` - `canceled` | | `results` | The task's return value. Present only if `status` is `completed`. | | `parent_task_run_id` | The ID of the parent task run, if this task was called as a [subtask](workflows-defining#running-subtasks) by another task. For a root-level task, this value is `None`. | | `root_task_run_id` | The ID of the root task run in this run's execution chain. For a root-level task, this value matches the value of `id`. | | `retries` | The number of times the task run has retried. For runs that succeed without retries, this value is `0`. [Learn more about retries.](workflows-defining#retry-logic) | ## Exception types Exceptions raised by the SDK have one of the types listed below. `RenderError` is the parent class for all other exception types. 
```python from render_sdk.client.errors import ( RenderError, # Parent class for other exceptions ClientError, ServerError, TimeoutError, TaskRunError ) ``` | Exception | Description | |--------|--------| | `RenderError` | The base class for all exceptions raised by the SDK. | | `ClientError` | Raised when a request to the Render API returns a 400-level error code. Common causes include: - Invalid API key - Invalid task identifier - Invalid task arguments - Invalid action (e.g., canceling a task run that is already completed) | | `ServerError` | Raised when a request to the Render API returns a 500-level error code. | | `TimeoutError` | Raised when a request to the Render API times out. | | `TaskRunError` | Raised when an `await`ed task run fails. | # Persistent Disks You can attach a *persistent disk* to a paid Render [web service](web-services), [private service](private-services), or [background worker](background-workers). This enables you to preserve local filesystem changes across deploys and restarts. > *By default, Render services have an [*ephemeral filesystem*](deploys#ephemeral-filesystem).* > > This means that without a persistent disk, any changes you make to a service's local files are _lost_ every time the service redeploys or restarts. Persistent disks are useful for services such as: - Infrastructure components ([Elasticsearch](deploy-elasticsearch), [Kafka](https://kafka.apache.org), [RabbitMQ](deploy-rabbitmq), etc.) - A blogging platform or CMS ([WordPress](deploy-wordpress), [Ghost](deploy-ghost), [Strapi](https://strapi.io), etc.) - Collaboration apps ([Mattermost](https://mattermost.com), [GitLab](https://gitlab.com), [Discourse](https://www.discourse.org), etc.) - Custom datastores ([MySQL](deploy-mysql), [MongoDB](https://www.mongodb.com), etc.) - Note that Render offers managed [Postgres](postgresql) (relational database) and [Key Value](key-value) instances. 
If one of these services suits your needs, we recommend using it instead of setting up your own with a persistent disk. Persistent disks use the same high-performance SSDs as Render [Postgres](postgresql) and [Key Value](key-value) instances. All disks are encrypted at rest, and so are their [automatic daily snapshots](#disk-snapshots). ## Setup > Before you attach a persistent disk, it's helpful to understand important [limitations and considerations](#disk-limitations-and-considerations). You create persistent disks from the [Render Dashboard][dboard]. You can do so during service creation (click *Advanced* at the bottom of the creation form), or any time _after_ creation from your service's *Disks* page: [img] 1. Set your disk's *mount path* (such as `/var/data`). - *Only filesystem changes under this path are preserved across deploys and restarts!* The rest of your service's filesystem remains ephemeral. 2. Choose a disk *size*. - You can increase your disk's size later, but you can't _decrease_ it. Pick the smallest value that currently works for your service. 3. Click *Add disk*. After you save, Render triggers a new deploy for your service. The disk becomes available as soon as the deploy is live. ## Monitoring usage View your disk's usage over time from your service's *Disks* page in the [Render Dashboard][dboard]: [img] ## Disk snapshots Render automatically creates a snapshot of your persistent disk once every 24 hours. If your disk experiences critical data loss or corruption, you can completely restore its state to any available snapshot. Snapshots are available for at least seven days after they're created. > *Important:* > > - If you restore a snapshot, all changes to your disk that occurred _after_ the snapshot are lost. > - Render doesn’t support restoring only a portion of a disk snapshot. > - Do not restore a snapshot of a disk that's used for a custom database instance. 
[See details.](#restoring-a-custom-database) Restore a snapshot from your service's Disks page in the [Render Dashboard][dboard]: [img] ### Restoring a custom database If you use a persistent disk specifically to back a custom database instance on Render (such as MySQL or MongoDB), *do not perform a disk restore for database recovery purposes.* If you do, your database might restore to a corrupted state. Instead, create regular backups of your database using a tool like [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) for MySQL or [mongodump](https://www.mongodb.com/docs/manual/core/backups/#back-up-with-mongodump) for MongoDB. Restore your database's state using one of these backups. ## Transferring files You can securely transfer files between your disk-backed service and your local machine using a tool like [SCP](#scp) or [Magic-Wormhole](#magic-wormhole). ### SCP After you [set up SSH](ssh) for your service, you can transfer files using [SCP](https://man.openbsd.org/scp). For example, if your `ssh` command looks like this: ```shell ssh YOUR_SERVICE@ssh.YOUR_REGION.render.com ``` Then your `scp` commands look like this: ```shell{outputLines:1,3-5,7} # Copying a file from your service to your local machine scp -s YOUR_SERVICE@ssh.YOUR_REGION.render.com:/path/to/remote/file /destination/path/for/local/file file 100% 5930KB 999.9KB/s 00:05 # Copying a file from your local machine to your service scp -s /path/to/local/file YOUR_SERVICE@ssh.YOUR_REGION.render.com:/destination/path/for/remote/file file 100% 5930KB 999.9KB/s 00:05 ``` > We recommend using SCP with the `-s` flag to use the more modern SFTP protocol. Future releases of SCP will default to using SFTP, and this flag will no longer be needed. ### Magic-Wormhole The [Magic-Wormhole library](https://magic-wormhole.readthedocs.io/en/latest/) enables you to transfer files to and from your disk-backed service without using SSH and SCP. 1. 
In the [Render Dashboard][dboard], go to your service's **Shell** page. 2. **If you have a Docker-image-backed service,** use the shell to install `magic-wormhole`: - Run `apt update && apt install magic-wormhole` or the equivalent for your environment. - The `magic-wormhole` library is pre-installed on all Render native runtimes. 3. Use the shell to transfer your file with the `wormhole` command: ```shell{outputLines:2-3} wormhole send /path/to/filename.txt Sending 10.5 MB file named 'filename.txt' Wormhole code is: 4-forever-regain ``` 4. Note the code that appears in the output from `wormhole`. Then, from any internet-connected machine, install [magic-wormhole](https://magic-wormhole.readthedocs.io/en/latest/welcome.html) and run `wormhole receive`, entering the code when prompted. ## Disk limitations and considerations When attaching a persistent disk to your service, note the following: - *Only filesystem changes under your disk's mount path are preserved across deploys and restarts.* - The rest of your service's filesystem remains [ephemeral](deploys#ephemeral-filesystem). - A persistent disk is accessible by only a single service instance, and only at runtime. This means: - You can't access a service's disk from any other service. - You can't [scale](scaling) a service to multiple instances if it has a disk attached. - You can't access persistent disks during a service's [build command](deploys#build-command) or [pre-deploy command](deploys#pre-deploy-command) (these commands run on separate compute). - You can't access a service's disk from a [one-off job](one-off-jobs) you run for the service (one-off jobs run on separate compute). - Adding a disk to a service prevents [zero-downtime deploys](deploys#zero-downtime-deploys). This is because: - When you redeploy your service, Render stops the existing instance _before_ bringing up the new instance. - This instance swap takes a few seconds, during which _your service is unavailable_. 
- This is a necessary safeguard to prevent data corruption that can occur when different versions of an app read and write to the same disk simultaneously. - You can't add a disk to a [cron job](cronjobs) service. - As an alternative, you _can_ add a disk to a [background worker](background-workers), which is useful for processes that run continuously and don’t expose a port. - You can increase your disk's size later, but you can't _decrease_ it. Pick the smallest value that currently works for your service. - Increasing a disk's size does not cause downtime. The additional storage becomes available to your service within a few seconds. # Render Key Value *Render Key Value* provides low-latency in-memory storage that's ideal for shared caches and job queues. Key Value instances are compatible with virtually all clients that interact with Redis®\*. Paid Key Value instances include [disk-backed persistence](#data-persistence). ## Underlying libraries - Newly created Render Key Value instances run Valkey 8. *What is Valkey?* [Valkey](https://valkey.io/) is an open-source key-value store that began as a fork of Redis version 7.2.4. For most libraries and frameworks that connect to a Redis instance, Valkey is a drop-in replacement. Learn more in the [FAQ](valkey-faq). - Legacy Key Value instances (created before February 12, 2025) run Redis 6. - Legacy instances no longer receive version updates, but they will continue to operate as usual. ## Quickstarts These Render quickstarts include steps for provisioning a Key Value instance: - [Deploy a Celery background worker](deploy-celery) - [Deploy Rails with Sidekiq](deploy-rails-sidekiq) - [Rails caching with Redis](rails-caching-redis) - [Connecting to Render Key Value with ioredis](connecting-to-redis-with-ioredis) ## Create your Key Value instance 1. Go to [dashboard.render.com/new/redis](https://dashboard.render.com/new/redis), or select *New > Key Value* in the Render Dashboard. This form appears: [img] 2. 
Provide a helpful *Name* for your instance. - You can change this value at any time. 3. Choose a *Region* to run your instance in. - Choose the same region as your services that will connect to the instance. This minimizes latency and enables communication over your [private network](private-network). 4. Optionally change the instance's *Maxmemory Policy*. - [See details below.](#maxmemory-policy) 5. Scroll down and select an *instance type*. This determines its available RAM and connection limit. > [Learn about Free instance type limitations.](free#free-key-value) [img] 6. Click *Create Key Value*. You're all set! Your new instance's status updates to *Available* in the Render Dashboard when it's ready to use. ## Connect to your Key Value instance Every Key Value instance has two different URLs for incoming connections: - An *internal URL* for connections from your other Render services running in the _same region_ - Connections using the internal URL are unauthenticated by default. You can optionally [require authentication for internal connections](#requiring-auth-for-internal-connections). - An *external URL* for connections from _everything else_ - Before you can use the external URL, you must first [enable external connections](#enabling-external-connections) for your Key Value instance. [diagram] *Use the internal URL wherever possible.* It minimizes latency by enabling communication over your [private network](private-network). Both URLs are available from the *Connect* menu in the top-right corner of your instance's page in the [Render Dashboard][dboard]: [img] Key Value instances use `redis://` and `rediss://` URL schemes. You can connect to your instance using any Redis-compatible client that supports these schemes. ### Internal connection examples > To connect with your internal URL, your Key Value instance and your connecting service must belong to the same workspace _and_ run in the same [region](regions). 
**ioredis (JS)** ```js import Redis from 'ioredis' // Connect to your Key Value instance using the REDIS_URL environment variable // The REDIS_URL is set to the internal connection URL e.g. redis://red-343245ndffg023:6379 const redis = new Redis(process.env.REDIS_URL) // Set and retrieve some values await redis.set('key', 'ioredis') const result = await redis.get('key') console.log(result) ``` **node-redis (JS)** ```js import { createClient } from 'redis' // Connect to your Key Value instance using the REDIS_URL environment variable // The REDIS_URL is set to the internal connection URL e.g. redis://red-343245ndffg023:6379 const client = createClient({ url: process.env.REDIS_URL }) await client.connect() // Set and retrieve some values await client.set('key', 'node redis') const value = await client.get('key') console.log(value) ``` **redis-py (Python)** ```python import os import redis # Connect to your Key Value instance using the REDIS_URL environment variable # The REDIS_URL is set to the internal connection URL e.g. redis://red-343245ndffg023:6379 r = redis.from_url(os.environ['REDIS_URL']) # Set and retrieve some values r.set('key', 'redis-py') print(r.get('key').decode()) ``` **redis-rb (Ruby)** ```ruby require "redis" # Connect to your internal Key Value instance using the REDIS_URL environment variable # The REDIS_URL is set to the internal connection URL e.g. redis://red-343245ndffg023:6379 redis = Redis.new(url: ENV["REDIS_URL"]) # Set and retrieve some values redis.set("key", "redis ruby!") puts redis.get("key") ``` **Sidekiq (Ruby)** ```ruby require "sidekiq" # Connect to your internal Key Value instance using the REDIS_URL environment variable # The REDIS_URL is set to the internal connection URL e.g. 
redis://red-343245ndffg023:6379 Sidekiq.configure_server do |config| config.redis = { url: ENV["REDIS_URL"] } end Sidekiq.configure_client do |config| config.redis = { url: ENV["REDIS_URL"] } end # Simple example from https://github.com/mperham/sidekiq/wiki/Getting-Started class HardJob include Sidekiq::Job def perform(name, count) # do something end end HardJob.perform_async("bob", 5) ``` ### Enabling external connections By default, newly created Key Value instances are _not_ reachable at their external URL. To keep your instance secure, you can grant external access to specific sets of IPs. In the [Render Dashboard][dboard], go to your Key Value instance's **Info** page and scroll down to the **Networking** section: [img] Here you can specify IP address blocks using [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_blocks). The example above grants access to two example blocks: one for an office network and another for a single development machine. > **These rules apply only to connections that use your Key Value instance's external URL.** > > Your Render services in the same region as your Key Value instance can always connect using your instance's [internal URL](#connect-to-your-key-value-instance). If you attempt to connect from a disallowed IP address, your client will display an error like the following: ``` AUTH failed: Client IP address is not in the allowlist. ``` ### Requiring auth for internal connections By default, Key Value instances do not require authentication for internal connections over your [private network](private-network). This is why the default internal connection URL doesn't include a username or password: ```sh # An unauthenticated internal URL (default) redis://red-abc123:6379 # An authenticated internal URL redis://USERNAME_HERE:PASSWORD_HERE@red-abc123:6379 ``` > [External connections](#enabling-external-connections) always require authentication. 
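The only difference between the two URL forms is the `username:password@` portion before the host. As a sketch of how you might construct the authenticated form programmatically (the helper name and example values are hypothetical; it uses only Python's standard `urllib.parse`):

```python
from urllib.parse import urlsplit, urlunsplit

def add_credentials(url: str, username: str, password: str) -> str:
    """Hypothetical helper: insert username:password into a redis:// URL.

    Not part of any Render SDK -- shown only to illustrate the URL shape.
    """
    parts = urlsplit(url)
    netloc = f"{username}:{password}@{parts.hostname}"
    if parts.port:
        netloc += f":{parts.port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

print(add_credentials("redis://red-abc123:6379", "default", "s3cret"))
# → redis://default:s3cret@red-abc123:6379
```

A helper like this can be handy when scripting updates to many services' connection strings at once.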
To enforce additional security or fulfill compliance requirements, your Key Value instance can require authentication for internal connections: 1. From your Key Value instance's **Info** page in the [Render Dashboard][dboard], scroll down to the **Connections** section and click **Enable Internal Authentication**: > **Any existing connections that use your unauthenticated URL will break!** > > Before enabling this feature, we strongly recommend migrating your existing connections to use an authenticated URL. [See below.](#migrating-unauthenticated-key-value-connections) [img] 2. A confirmation dialog appears. Review the advisories, then click **Enable Internal Authentication**. After you confirm, Render restarts your Key Value instance (it will be unavailable for a few seconds). After the restart: - Your Key Value instance now requires authentication for all connections. - In the Render Dashboard, your instance's internal connection URL now includes a username and password. #### Migrating unauthenticated Key Value connections Before you [require authentication](#requiring-auth-for-internal-connections) for internal Key Value connections, you can update all of your existing connections to use an authenticated URL. This will prevent those connections from breaking when you enable authentication. 1. Obtain your Key Value instance's _external_ connection URL from the **Connect** menu: [img] > **External URL isn't shown?** > > You first need to add at least one IP range to your instance's [access control list](#enabling-external-connections). To continue blocking all external connections, you can add the dummy IP range `0.0.0.0/32`. 2. Extract the password from your external connection URL. The password starts just after the colon (`:`) and ends just before the "at" symbol (`@`): ``` rediss://user:PASSWORD_HERE@red-abc123:6379 ``` 3. 
For each service that connects to your Key Value instance with the internal URL, modify its connection string as shown below: ```sh # Before redis://red-abc123:6379 # After redis://default:PASSWORD_HERE@red-abc123:6379 ``` Specifically, you add the username `default` and the password you extracted in the previous step. Every Key Value instance has a user named `default`, and this user can accept _any_ password until auth is required. 4. Redeploy your modified services (unless they automatically redeployed when you updated their configuration). 5. You can now proceed through the steps to [require authentication](#requiring-auth-for-internal-connections) for internal connections. After your Key Value instance restarts with authentication enabled, the `default` user now _requires_ the exact password you provided. Because you've updated your connections accordingly, they remain functional. ### Connecting with CLI tools [`redis-cli`](https://redis.io/topics/rediscli) is a useful administrative tool for exploring and manipulating data on your Key Value instance. There are two ways to use `redis-cli` with your instance: - If you have a running non-Docker service, `redis-cli` is available as part of the environment and is accessible from the service's Shell page. The service must be in the same region as your Key Value instance. You can also [SSH into that service](ssh) and run `redis-cli` from there. - You can run `redis-cli` locally. First, [install](https://redis.io/docs/getting-started/installation/) it on your machine. A copy-pasteable `redis-cli` command is available in the *External Access* section of your instance's settings. Note that you first need to [enable external connections](#enabling-external-connections) for your instance. > External connections are TLS-secured. The provided `redis-cli` command includes the `--tls` flag. 
After you connect, you can set and get keys using various commands: ``` oregon-redis.render.com:6379> set "render_is_cool" true OK oregon-redis.render.com:6379> get "render_is_cool" "true" oregon-redis.render.com:6379> KEYS r* 1) "render_is_cool" ``` ## Configure your Key Value instance ### Maxmemory policy Your Key Value instance's **maxmemory policy** determines which data it evicts to free space when it reaches its memory limit. You select a policy on instance creation and can change it later. - **For caching use cases,** we recommend using `allkeys-lru`. - **For job queues,** we recommend using `noeviction` to ensure that queued jobs are not lost. - **For other use cases,** select a policy from the table below based on your requirements. You can select any of the following policies: | Option | Description | Can memory fill up? | |--------|--------|--------| | `allkeys-lru` | Evict any key using approximated Least Recently Used (LRU). | No | | `noeviction` | Don't evict data. Instead, return an error on write operations whenever the instance is out of memory. | Yes | | `volatile-lru` | Evict using approximated LRU, only keys with an expire set. | Yes | | `volatile-lfu` | Evict using approximated Least Frequently Used (LFU), only keys with an expire set. | Yes | | `allkeys-lfu` | Evict any key using approximated LFU. | No | | `volatile-random` | Remove a random key having an expire set. | Yes | | `allkeys-random` | Remove a random key, any key. | No | | `volatile-ttl` | Remove the key with the nearest expire time (minor TTL) | Yes | ### Changing instance types You can upgrade your Key Value instance to a larger instance type with more RAM and a higher connection limit. > **Note the following before you upgrade:** > > - It is not currently possible to downgrade a Key Value instance. > - Your Key Value instance will be unavailable for a minute or two during the upgrade. > - If you upgrade a Free Key Value instance, all of its data will be lost. 
> - This is because Free Key Value instances don't persist data to disk. 1. In the [Render Dashboard][dboard], open your instance's **Info** page and scroll down to the **Key Value Instance** section. 2. Under **Instance Type**, click **Update**. 3. Select a new instance type and click **Save Changes**. > **Need an instance with more than 10 GB of RAM?** > > Please reach out to our support team in the [Render Dashboard][dboard]. ### Blueprint configuration As with your other services, you can manage your Key Value instances with [Blueprints](infrastructure-as-code), Render's infrastructure-as-code model. For details and examples, see the [Blueprint YAML Reference](blueprint-spec#render-key-value). ## Data persistence Paid Key Value instances on Render write their state to disk once per second via the configuration `appendfsync everysec`. If a paid instance experiences an interruption (or if you [upgrade your instance type](#changing-instance-types)), you might lose up to one second of writes. [Free Key Value instances](free#free-key-value) do _not_ persist data to disk. ## Metrics Metrics for memory usage, CPU load, and active connections are available from your Key Value instance's Metrics page in the Render Dashboard: [img] For details, see [Service Metrics](service-metrics#available-metrics). # FAQ: Valkey on Render Render has adopted *Valkey* in place of Redis®\* for all newly created instances of [Render Key Value](key-value). *Existing Redis instances continue operating as usual.* [See details below.](#what-will-happen-to-my-existing-redis-instances) ## FAQ ### What is Valkey? [Valkey](https://valkey.io/) is an open-source key-value store that began as a fork of Redis version 7.2.4. For most applications and frameworks that connect to a Redis instance, Valkey is a drop-in replacement. Valkey and Redis are completely independent projects that might diverge further over time. ### Why has Render adopted Valkey for new Key Value instances? 
In 2024, the Redis project moved from the BSD 3-clause license to the [Redis Source Available License](https://redis.io/blog/redis-adopts-dual-source-available-licensing/). This change imposed restrictions on offering Redis as a managed service (and also led to the creation of the Valkey project). Valkey immediately gained strong community backing, and it has established itself as the leading open-source Redis-compatible datastore. After comparing it against other available options, we believe Valkey provides the best combination of capabilities, compatibility, and community support for Render customers. ### How do I create a Valkey instance? When you create a new Render Key Value instance, that instance automatically runs Valkey. The creation process is largely identical to the previous process for creating Redis instances. In a `render.yaml` [Blueprint file](infrastructure-as-code), the values `keyvalue` and `redis` are equivalent for a service's [`type`](blueprint-spec#type) field: when used to create a new service, both values provision a Render Key Value instance that runs Valkey. ### What will happen to my existing Redis instances? Existing Redis instances will continue operating as usual, with the following platform changes: - *Redis instances will no longer receive version updates.* They will remain on version 6.2.14 indefinitely. - *You cannot create _new_ Redis instances, with one exception:* [Preview environments](preview-environments) can create Redis instances as needed to accurately replicate your existing instances. ### Do I need to migrate my existing Redis instances to Valkey? *No, but you're welcome to.* For the vast majority of Render users, Valkey is a drop-in replacement for Redis. ### Are there any changes to pricing? *No.* Render Key Value instance types remain unchanged in both specs and pricing. The only difference is that new instances run Valkey instead of Redis. 
# Create and Connect to Render Postgres > *Migrating from Heroku?* > > We're previewing an upcoming tool for low-downtime PostgreSQL migration and are looking for organizations with a large (50+ GB) Heroku Postgres database to migrate. We'll work with selected organizations to help ensure a successful, speedy migration. > > [Apply for the preview.](https://docs.google.com/forms/d/e/1FAIpQLSfH9R6b-tAC9Cm-6Y6dIJxtWF04XV7DCVMwW8aWYUStXBa1Kg/viewform) Render Postgres databases provide fully managed, scalable storage of relational data. All paid Render Postgres databases provide [point-in-time recovery](postgresql-backups) and on-demand logical exports. Larger instances support [read replicas](postgresql-read-replicas) and [high availability](postgresql-high-availability) for improved performance and reliability. ## Quickstarts Here are a few Render quickstarts that include a Render Postgres database as part of their application stack: - [Django](deploy-django) - [Rails](deploy-rails-8) - [Phoenix](deploy-phoenix-distillery) ## Create your database 1. Go to [dashboard.render.com/new/database](https://dashboard.render.com/new/database), or click *+ New > Postgres* in the Render Dashboard. This form appears: [img] 2. Provide a helpful *Name* for your database. - You can change this value at any time. 3. Optionally fill in the *Database* and/or *User* fields if you want to set your PostgreSQL `dbname` and/or username. - Render generates a value for each of these fields that you don't specify. - You _can't_ change these values after creating your database. 4. Choose a *Region* to run your database in. - Choose the same region as your services that will connect to the database. This minimizes latency and enables communication over your [private network](private-network). 5. Optionally change the *PostgreSQL Version* if you want to use an older version. - All currently supported major versions are available for new instances. 
- Versions 11 and 12 are available for workspaces that have at least one _existing_ database on the corresponding version. 6. Scroll down and select an *instance type* for your database. This determines its available RAM and CPU. > [Learn about limitations of the Free instance type.](free#free-postgres) [img] You can [change your instance type](#changing-your-instance-type) later. 7. Scroll down and set your database's initial storage, in GB. - You can specify 1 GB or any multiple of 5 GB. - You can increase your storage later, but you can't decrease it. 8. Optionally enable *Storage Autoscaling*. - Whenever your database is 90% full, Render automatically increases its storage by 50%, rounded up to the nearest multiple of 5 GB. You can't reduce storage after increasing it. [Learn more.](#storage-autoscaling) 9. Click *Create Database*. You're all set! Your new database's status updates to *Available* in the Render Dashboard when it's ready to use. ## Connect to your database Every Render Postgres database has two different URLs for incoming connections: - An *internal URL* for connections from your other Render services hosted in the _same region_ - An *external URL* for connections from _everything else_ [diagram] *Use the internal URL wherever possible.* It minimizes query latency by enabling communication over your [private network](private-network). Both URLs are available from the *Connect* menu in the top-right corner of your database's page in the [Render Dashboard][dboard]: [img] How you connect to your database depends on your code: some frameworks expect a single connection string or URL in an environment variable, while others need multiple connection parameters in a configuration file. See [Quickstarts](#quickstarts) for examples. At a minimum, your app needs to know your database's hostname, port, username, password, and database name (such as `mydb` in the [official PostgreSQL tutorial](https://www.postgresql.org/docs/current/tutorial-createdb.html)). 
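If your framework expects discrete parameters rather than a single URL, you can derive them from the connection URL. A minimal sketch using Python's standard `urllib.parse` (the helper name and example URL are hypothetical):

```python
from urllib.parse import urlsplit

def split_database_url(url: str) -> dict:
    # Hypothetical helper: break a connection URL into the discrete
    # parameters (host, port, user, password, dbname) some frameworks expect.
    parts = urlsplit(url)
    return {
        "host": parts.hostname,
        "port": parts.port or 5432,  # PostgreSQL's default port
        "user": parts.username,
        "password": parts.password,
        "dbname": parts.path.lstrip("/"),
    }

print(split_database_url("postgresql://myuser:mypass@dpg-abc123:5432/mydb"))
```

Most PostgreSQL clients also accept the URL directly, so a helper like this is only needed for configuration formats that take individual fields.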
> Render Postgres uses the default PostgreSQL port `5432`. You can usually leave this port unspecified. ### Internal connections > To use the internal URL, your connecting service and your database must belong to the same workspace and run in the same [region](regions). Wherever possible, connect to your database using its internal URL. Internal connection details are available on your database's *Info* page in the [Render Dashboard][dboard]: [img] You can view individual details, along with the assembled internal URL (of the format `postgresql://USER:PASSWORD@INTERNAL_HOST:PORT/DATABASE`). Use whichever format your framework expects for database credentials. ### External connections > *External URL connections are slower because they traverse the public internet.* > > To minimize latency, use your database's [internal URL](#internal-connections) when connecting from a Render service running in the same region. Tools and systems outside of Render can connect to your database via its external URL, available from its *Info* page in the [Render Dashboard][dboard]: [img] Most database clients understand the external URL, which has the format `postgresql://USER:PASSWORD@EXTERNAL_HOST:PORT/DATABASE`. You can also run the provided *PSQL Command* directly in your terminal to start a psql session. > If you encounter an SSL error, confirm that your PostgreSQL client supports TLS version 1.2 or higher, and that it supports any of the following cipher suites: > > - `TLS_AES_128_GCM_SHA256` > - `TLS_AES_256_GCM_SHA384` > - `TLS_CHACHA20_POLY1305_SHA256` > - `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256` > - `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256` > - `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384` ### Restricting external access By default, your Render Postgres instance is accessible from any IP address (if the connection uses valid credentials). You can modify this default behavior by restricting access to a set of IPs or even disabling external access entirely. 
In the [Render Dashboard][dboard], go to your database's *Info* page and scroll down to the *Networking* section: [img] You can specify IP address blocks using [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_blocks). The default block is `0.0.0.0/0`, which allows access from any IP address. > *These rules apply only to connections that use your database's [*external URL*](#external-connections).* > > Your Render services in the same region as your database can always connect using your database's [internal URL](#internal-connections). ### Connection limits Your database's maximum number of simultaneous connections depends on its instance type's total memory (RAM): | Memory | Max Connections | |--------|--------| | < 8 GB | 100 connections | | 8 GB <= memory < 16 GB | 200 connections | | 16 GB <= memory < 32 GB | 400 connections | | \>= 32 GB | 500 connections | If you're approaching your connection limit, consider [upgrading your instance type](#changing-your-instance-type) or implementing [connection pooling](postgresql-connection-pooling). > *Databases on a [*legacy instance type*](postgresql-legacy-instance-types) support fewer connections:* > > | Memory | Max Connections | > |--------|--------| > | \<= 6 GB | 97 connections | > | Between 6 GB and 10 GB | 197 connections | > | \>= 10 GB | 397 connections | > > You can move your database to a flexible plan by [changing its instance type](#changing-your-instance-type). ## Adding storage You set your database's initial storage during [creation](#create-your-database). Any time after that, you can increase your database's storage to any higher multiple of 5 GB, up to 16 TB. > *Need more than 16 TB of storage?* > > Please [contact support](https://dashboard.render.com?contact-support) in the Render Dashboard. You can increase storage [automatically](#storage-autoscaling) or [manually](#increasing-storage-manually). 
Note the following: - After you increase a database's storage, you can't increase it again for 12 hours. - It is not possible to reduce a database's storage. - Databases on a [legacy instance type](postgresql-legacy-instance-types) have a fixed storage capacity. - You can move your database to a flexible plan by [changing its instance type](#changing-your-instance-type). ### Storage autoscaling You can automatically add storage to your database whenever it's running low. With *storage autoscaling* enabled, Render detects when your database is 90% full and permanently increases its storage by 50%, rounded up to the nearest multiple of 5 GB. Here are some example increases: | Original Storage | New Storage | |--------|--------| | 1 GB | 5 GB | | 10 GB | 15 GB | Enable storage autoscaling with any of the following methods: **Dashboard** #### Dashboard 1. From your database's *Info* page in the [Render Dashboard][dboard], scroll down to the *PostgreSQL Instance* section and click *Update*: [img] 2. Scroll down to the *Enable Storage Autoscaling* field and toggle the switch. 3. Click *Save Changes*. That's it! Render will automatically add storage to your database whenever it's 90% full. **API** #### API Using the [Render API](api), you enable storage autoscaling with the [Update Postgres instance](https://api-docs.render.com/reference/update-postgres/) endpoint. In your request, set the `enableDiskAutoscaling` parameter to `true`. ### Increasing storage manually Manually add storage to your database with any of the following methods: **Dashboard** #### Dashboard 1. From your database's *Info* page in the [Render Dashboard][dboard], scroll down to the *PostgreSQL Instance* section and click *Update*: [img] 2. Scroll down to the *Storage* field and provide a new value. - Provide any multiple of 5 GB greater than the current storage capacity. 3. Click *Save Changes*. That's it! The additional storage becomes available within a minute or two. 
**API** #### API Using the [Render API](api), you increase your database's storage capacity with the [Update Postgres instance](https://api-docs.render.com/reference/update-postgres/) endpoint. Provide the new value in the `diskSizeGB` parameter. Provide any multiple of 5 GB greater than the current storage capacity. ### Running out of storage *If your database exceeds its storage limit, it becomes unhealthy.* Render automatically suspends the database to prevent data loss or other unexpected behavior. To restore your database: 1. In the [Render Dashboard][dboard], scroll to the bottom of your database's *Info* page and click *Resume Database*. 2. Wait a minute or two for the database to finish resuming. 3. Follow the steps to [manually add storage capacity](#increasing-storage-manually). - If you wait too long after resuming, Render will suspend your database again. In this case, return to step 1. Your database will become healthy within a few minutes. ## Changing your instance type You can change your Render Postgres database's instance type, which determines its available RAM and CPU. [View available instance types.](pricing#postgresql) > *Your database will be unavailable temporarily during the change.* > > - With [high availability](postgresql-high-availability) enabled, your database is unavailable for only a few seconds. > - Otherwise, it's unavailable for a few minutes. > > Schedule your change during off hours to minimize user impact. 1. From your database's *Info* page in the [Render Dashboard][dboard], scroll down to the *PostgreSQL Instance* section and click *Update*: [img] 2. Under *Plan Options*, select a new *Instance Type*. - If your database currently uses a [legacy instance type](postgresql-legacy-instance-types), you won't be able to move _back_ to a legacy instance type after changing. 3. Click *Save Changes*. That's it! Your new instance will be available within a few minutes. 
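The storage autoscaling rule described earlier on this page (grow a 90%-full disk by 50%, rounded up to the nearest multiple of 5 GB) can be sketched arithmetically; the function name here is hypothetical:

```python
import math

def autoscaled_storage_gb(current_gb: int) -> int:
    # Documented autoscaling rule: when a disk is 90% full, grow it
    # by 50%, rounded up to the nearest multiple of 5 GB.
    return math.ceil(current_gb * 1.5 / 5) * 5

print(autoscaled_storage_gb(1))   # 1 GB  -> 5 GB
print(autoscaled_storage_gb(10))  # 10 GB -> 15 GB
```

These values match the example increases shown in the storage autoscaling table above.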
## Adding multiple databases to a single instance You can create additional databases in your Render Postgres instance with the following steps: 1. In your terminal, open a psql session to your instance using the *PSQL Command* provided in the [Render Dashboard][dboard]: [img] 2. Run the `CREATE DATABASE` command, providing the name for your new database. You're all set! Use your instance's existing internal and external URLs to connect, substituting your new database's name as the final path component: ``` postgresql://USER:PASSWORD@INTERNAL_HOST:PORT/DATABASE ``` ## Encryption Render Postgres databases are encrypted at rest using AES-256 data encryption. This encryption applies to both primary and replica instances, along with all backups. [External connections](#external-connections) to your database are encrypted in transit using Render-managed TLS certificates. ## Metrics and logs ### Dashboard View a variety of metrics for your database (disk usage, active connections, etc.) from its **Metrics** page in the [Render Dashboard][dboard]: [img] For details, see [Service Metrics](service-metrics#available-metrics). ### Datadog The Datadog integration provides additional metrics related to your PostgreSQL instance's host and disk. You can also use the Datadog UI to create dashboards and alerts for your database. For details, see the [Datadog integration docs](datadog#setting-up-postgres-monitoring). ### Viewing slow query logs Queries that take longer than 2 seconds are logged with a line that starts with `duration:` followed by the SQL statement. Here's an example: [img] ## Deleting your database > *Render does not retain backups or snapshots of a deleted database instance!* > > Make sure to download any necessary backups before deleting your database. You can delete a database instance in the [Render Dashboard][dboard]. Scroll down to the bottom of your database's *Info* page and click *Delete Database*.
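The URL substitution described in the multiple-databases section above amounts to swapping the final path component of the connection string. A minimal sketch (the helper name is illustrative, not part of Render's tooling):

```python
def url_for_database(instance_url: str, database: str) -> str:
    """Swap the final path component of a Postgres URL for another database name."""
    base, _, _ = instance_url.rpartition("/")
    return f"{base}/{database}"

original = "postgresql://user:secret@host.internal:5432/mysite"
print(url_for_database(original, "analytics"))
# → postgresql://user:secret@host.internal:5432/analytics
```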
## Additional topics See articles on the following: - [Recovery and backups](postgresql-backups) - [Read replicas](postgresql-read-replicas) - [High availability](postgresql-high-availability) - [Upgrading your PostgreSQL version](postgresql-upgrading) - [Connection pooling](postgresql-connection-pooling) - [Render Postgres extensions](postgresql-extensions) - [Performance troubleshooting](postgresql-performance-troubleshooting) # Render Postgres Recovery and Backups > *Need to recover lost data? [*Start here.*](#perform-a-recovery)* > > We're happy to help with restores and disaster recovery. Reach out to our support team in the [Render Dashboard][dboard]. Render continually backs up paid Render Postgres databases to provide *point-in-time recovery* (PITR). This enables you to restore your database to any previous state from the past few days, so you can recover from an accidental table drop or other data loss. Your database's available recovery window depends on your [workspace plan](pricing): | Workspace plan | Recovery window | | ---------------------- | --------------- | | Hobby | Past 3 days | | Professional or higher | Past 7 days | When you trigger PITR, Render spins up a _new_ database instance that reflects your original instance's state at a specified time in the past. This enables you to validate the new instance in isolation before updating your services to use it. [diagram] - *If your recovery instance reflects the state you expect,* you can then configure your other services to use it instead of the original instance. - *Otherwise,* you can delete the recovery instance and initiate a new recovery using a different point in time. ## Perform a recovery > *Render does not provide recovery capabilities for the Free Render Postgres instance type.* > > To enable these capabilities, [upgrade your instance type](postgresql-creating-connecting#changing-your-instance-type). 1. 
In the [Render Dashboard][dboard], select your database from the service list and open its *Recovery* page. 2. Scroll down to the *Point-in-Time Recovery* section and click *Restore Database*: [img] 3. The following form appears: [img] 4. Provide a name for the new database instance. 5. Specify an available date and time to restore to. - You can't restore to a time that's within ten minutes of the current time. 6. Select whether to *Copy Existing Settings*. - If you select *No*, you'll have the option to specify a different instance type, Datadog API key, and/or project for the recovery instance. - The recovery instance _always_ copies the [IP address allow list](postgresql-creating-connecting#restricting-external-access) from the original instance. 7. Click *Start Recovery* to initiate the restore. - If you selected *No* in the previous step, click *Customize Recovery* and then provide your new settings for the recovery instance. 8. In your service list, the recovery instance's status will advance from *Recovery In Progress* to *Creating*, and then to *Available* when it's ready to accept connections. 9. Validate that the data in the recovery instance is what you expect. - You can connect to the recovery instance from your terminal using the *PSQL Command* provided on the database's *Info* page. 10. Update your services and other tools to connect to the recovery instance instead of the original instance. - The recovery instance's name and connection strings are available in the Render Dashboard. - [Environment groups](configure-environment-variables#environment-groups) enable you to update the connection string for multiple services in one place. 11. After verifying that all systems are successfully connected to the recovery instance, you can delete or suspend the original instance. The recovery is complete. Your recovery instance is now your primary instance. 
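Before triggering a recovery, you can sanity-check a candidate restore time against the two constraints above: it must fall within your plan's recovery window and be at least ten minutes in the past. This is a client-side sketch only (Render's own validation is authoritative, and the helper name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def is_restorable(target, window_days, now=None):
    """Check a target restore time against the plan's recovery window
    (3 days on Hobby, 7 on Professional+) and PITR's ten-minute minimum age."""
    now = now or datetime.now(timezone.utc)
    return now - timedelta(days=window_days) <= target <= now - timedelta(minutes=10)

now = datetime(2025, 2, 3, 12, 0, tzinfo=timezone.utc)
print(is_restorable(now - timedelta(hours=2), 3, now=now))    # → True
print(is_restorable(now - timedelta(minutes=5), 3, now=now))  # → False (too recent)
print(is_restorable(now - timedelta(days=5), 3, now=now))     # → False (outside window)
```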
## Logical backups You can create and export logical backups of your database in the [Render Dashboard][dboard]. Download these backups for long-term retention or to restore into a new database instance. Render retains logical backups for seven days after creation, regardless of your workspace plan. > *Render does not create logical backups for the Free Render Postgres instance type.* > > To create a logical backup for a free instance, do one of the following: > > - [Upgrade your instance type.](postgresql-creating-connecting#changing-your-instance-type) > - Use the `pg_dump` utility from your local machine. ### Trigger a backup From your database's *Recovery* page, click *Create export*: [img] > You can't trigger an export if _another_ export is in progress for the same database. When it's ready, your database's new export appears in the table on its *Recovery* page. Each export is provided as a compressed directory file (`.dir.tar.gz`). Click any export's download link to save it to your local machine. > *Interested in automating logical backup retention?* > > See [Backup Render Postgres to Amazon S3](backup-postgresql-to-s3). This guide walks through creating a [cron job](cronjobs) that uploads SQL backups from `pg_dump` to S3. ### Restoring from a backup file > *Read this before you proceed:* > > - The commands below include flags to _drop_ relevant databases and then recreate them. > - Do not restore into a database that contains important data in the same schema as the export. > - In the event of data loss, we recommend instead using [*point-in-time recovery*](#perform-a-recovery) to restore your database. > - PITR almost always enables you to recover more recent data than what's available in your latest export. You can use an exported backup to restore your data into a PostgreSQL instance running on Render or your local machine: 1. Go to your database's *Recovery* page and click the `.dir.tar.gz` download link for any available export. 
- The downloaded export's filename indicates its time of creation (e.g., `2025-02-03T19_21Z.dir.tar.gz`). 2. If you're restoring into a Render-hosted database, obtain its [external database URL](postgresql-creating-connecting#external-connections). 3. Install [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) for the major version of the PostgreSQL instance you are restoring into. 4. Run the following commands, providing your export file and the target database URL where indicated: ```shell{outputLines:1,3-4} # Extract the export tar -zxvf 2025-02-03T19_21Z.dir.tar.gz # Restore the export to your database using its external connection string (available in the dashboard) pg_restore --dbname=$external_database_url --verbose --clean --if-exists --no-owner --no-privileges --format=directory 2025-02-03T19_21Z/my_render_database_name ``` # Database Credentials for Render Postgres You can add and delete PostgreSQL users from your Render Postgres database. This is most commonly helpful for performing a [credential rotation](#rotating-credentials). Any users you [create](#adding-a-user) via the Render Dashboard or API are listed in the *Credentials* section of your database's *Info* page: [img] Render provides enhanced management of these users, including automatic updates to [Blueprint](infrastructure-as-code)-managed environment variables that [reference the database's connection string](blueprint-spec#referencing-values-from-other-services). > *Users created directly with the [`CREATE USER`](https://www.postgresql.org/docs/current/sql-createuser.html) command are _not_ managed by Render.* > > - These users are not displayed in the Render Dashboard or API. > - These users do not replace your database's [default user](#the-default-user) on creation.
## Managing PostgreSQL users ### The default user Whenever you [add a PostgreSQL user](#adding-a-user) to your database, that user becomes the database's new "default" user: - The default user's credentials appear in your database's [connection URLs](postgresql-creating-connecting#connect-to-your-database) shown in the Render Dashboard. - Default user credentials are also used by environment variables that [reference connection strings](blueprint-spec#referencing-values-from-other-services) in a Render [Blueprint](infrastructure-as-code). - Whenever the default user changes, these environment variables update their value on the next Blueprint sync. - All other existing PostgreSQL users remain valid. However, you can no longer view credentials for the previous default user. ### Adding a user Add a new Render-managed PostgreSQL user to your database with any of the following methods: > *Users created directly with the [`CREATE USER`](https://www.postgresql.org/docs/current/sql-createuser.html) command are _not_ managed by Render.* > > - These users are not displayed in the Render Dashboard or API. > - These users do not replace your database's [default user](#the-default-user) on creation. **Dashboard** #### Dashboard 1. From your database's *Info* page in the [Render Dashboard][dboard], scroll down to the *Credentials* section: [img] 2. Click *+ New default credential*. The creation dialog appears. 3. Optionally provide a username for the new credential. If you don't, Render generates one for you. 4. Click *Create Credential*. That's it! Your newly created user appears in the table as the new default user for your database. **API** #### API Using the [Render API](api), you can create a new credential with the [Create PostgreSQL User](https://api-docs.render.com/reference/create-postgres-user) endpoint. Provide a `username` in your request body: ```json { "username": "my_new_user" } ``` ### Viewing users You can view the active PostgreSQL users for your database.
Note that these methods only show users that you added with one of the methods [above](#adding-a-user), not built-in PostgreSQL roles or users created via `CREATE USER`. **Dashboard** #### Dashboard From your database's **Info** page in the [Render Dashboard][dboard], scroll down to the **Credentials** section. This section displays all active users, with the default user indicated by a label: [img] **API** #### API Using the [Render API](api), you can list all PostgreSQL users with the [List PostgreSQL Users](https://api-docs.render.com/reference/list-postgres-users) endpoint. Each object in the response array includes `username`, `createdAt`, and `default` properties. The `default` property is `true` for the default user. ### Deleting a user You can delete PostgreSQL users if they have compromised credentials or are no longer needed. > **You can't delete your database's current default user.** > > To perform a credential rotation involving the default user, first [create a new user](#adding-a-user) to make it the new default. You can then delete the previous default user. **Dashboard** #### Dashboard 1. From your database's **Info** page in the [Render Dashboard][dboard], scroll down to the **Credentials** section: [img] 2. Click the trashcan icon next to the user you want to delete. 3. Confirm the deletion. That's it! The user and its credentials are removed immediately. **API** #### API Using the [Render API](api), you can delete a credential with the [Delete PostgreSQL User](https://api-docs.render.com/reference/delete-postgres-user) endpoint. > **Render never fully deletes your database's _original_ user.** > > If you "delete" the original user, Render actually deactivates it by revoking its login privileges. This is a safeguard to preserve database objects owned by the original user. ## Rotating credentials The following diagram illustrates performing a zero-downtime credential rotation for your database (steps described below): [diagram] 1. 
[Add a new PostgreSQL user](#adding-a-user) to your database. 2. Update the configuration of all apps and services that connect to your database to use the new user's credentials. - For [Blueprint](infrastructure-as-code)-managed services that [dynamically reference the database's connection string](blueprint-spec#referencing-values-from-other-services), perform a manual Blueprint sync to update the environment variable. 3. Redeploy all of your connected apps and services with the updated credentials. 4. Monitor your database to confirm when no connections are using the original user's credentials. You can help confirm this with a query like the following (substitute the original user's name where indicated): ```sql SELECT COUNT(*) FROM pg_stat_activity WHERE usename = 'ORIGINAL_USER_NAME_HERE'; ``` 5. [Delete the original user.](#deleting-a-user) ## Webhook support Credential management actions trigger the following [webhook](webhooks) events: | Event | Description | |--------|--------| | `PostgresCredentialsCreated` | Triggers when a PostgreSQL user is created. | | `PostgresCredentialsDeleted` | Triggers when a PostgreSQL user is deleted. | # Read Replicas for Render Postgres *Read replicas* are separate instances of your [Render Postgres database](postgresql) that only allow read access. As you write data to your primary instance, Render asynchronously replicates those changes to your read replicas: [diagram] Read replicas can reduce load on your primary instance and make one-off queries safer. They're great for analysis tools that don't need to write data, or for running computationally expensive queries without affecting the performance of your primary instance. > Read replicas always have the same instance type and storage as their primary database and are billed accordingly. 
## Requirements For your Render Postgres database to support read replicas, it must: - Have at least 10 GB of storage - Use the *Basic-1gb* instance type or higher - If your database uses a [legacy instance type](postgresql-legacy-instance-types), it must use the *Standard* instance type or higher. Any database that meets these requirements can have up to five read replicas. ## Setup Go to your database's *Info* page in the [Render Dashboard][dboard] and click *Add Read Replica*: [img] A confirmation dialog appears. If you confirm, Render spins up the replica instance and starts copying over data from the primary instance. That's it! Your read replica should become available within a few minutes. If it takes longer, please reach out to our support team in the [Render Dashboard](https://dashboard.render.com?contact-support). After a read replica becomes available, you can connect to it just like you do your primary instance, using its [internal or external connection URL](postgresql-creating-connecting#connect-to-your-database). ## Performance Changes to your primary database are synced to its read replicas after a short delay. This means replicas are best suited to use cases that don't require instant access to the most recent data possible. The length of this delay depends on your primary instance's load. You can monitor this from the primary instance's *Metrics* page, under *Replication Lag*. ## Read replicas vs. high availability Read replica instances are different from a *standby* instance that's used for [high availability](postgresql-high-availability). Read replicas help decrease load on your primary instance, and they're safer for one-off and expensive queries. In contrast, a standby instance helps reduce downtime in the event of instance failure. # High Availability for Render Postgres You can enable *High Availability* (*HA*) for any Render Postgres database with the [required specs](#prerequisites). 
When you enable HA, Render maintains a separate *standby* instance of your database that asynchronously replicates the state of your *primary* instance: [diagram] The standby runs in the same [region](regions) as the primary, but in a different "zone" that's geographically separate from the primary (on the order of tens of kilometers). This separation helps to maximize availability in the event of a major disruption. If a critical issue causes your primary instance to become unavailable for 30 seconds, Render detects this and [automatically fails over](#automatic-failover) to the standby to keep you up and running: [diagram] This process takes a few seconds, after which the standby instance becomes the new primary (now hosted at the same URL as the _original_ primary). When the degraded instance becomes healthy again, it becomes the new standby. > Your standby instance always has the same instance type and storage as your primary instance and is billed accordingly. ## Prerequisites For your database to support high availability, it must: - Use a *Pro* or *Accelerated* [instance type](pricing#postgresql) - Use PostgreSQL version 13 or later > *If your database uses a [*legacy instance type*](postgresql-legacy-instance-types), it must:* > > - Use the *Pro* instance type or higher > - Use PostgreSQL version 13 or later ## Setup > *Enabling HA requires a database restart!* > > Your database will be unavailable temporarily (usually for less than five minutes). Schedule your activation of this feature accordingly. 1. In the [Render Dashboard][dboard], select your database and open its *Info* page. 2. Scroll down to the *High Availability* section and toggle the switch: [img] 3. A confirmation dialog appears. Review the details and then click *Enable HA*. That's it! Your database will restart with HA enabled. ## Failover *Failover* refers to the process of swapping out your primary database instance for your standby instance. 
Render performs failover [automatically](#automatic-failover) when your primary instance becomes unavailable, and you can perform a [manual](#manual-failover) failover for testing purposes. In all cases, failover takes just a few seconds, after which your other services can [reconnect](#reconnecting-after-a-failover) to your database. ### Automatic failover Render automatically triggers a failover to your database's standby instance whenever your primary instance becomes unavailable for 30 seconds. Your primary instance might become unavailable because: - The node running the instance becomes unresponsive or goes down. - A network disruption prevents communication with the instance. - The PostgreSQL process itself crashes. > Automatic failover might fail to preserve a small number of the most recent writes to your degraded primary instance. For details, see [Limitations of HA](#limitations-of-ha). ### Manual failover > *Manual failover is intended for testing and compliance purposes.* [Automatic failover](#automatic-failover) handles scenarios where your primary instance becomes unavailable. You can manually trigger a failover to your database's standby instance from the [Render Dashboard][dboard]. You might want to do this to test out reconnection behavior for your apps, or to demonstrate failover capabilities for compliance purposes. Go to your database's *Info* page and click *Trigger Failover* under the *High Availability* section: [img] Performing a manual failover with a healthy primary instance _almost never_ causes any loss of data. It's possible (but unlikely) that changes to your primary instance in the last few seconds before the failover will be lost. ### Reconnecting after a failover Whenever a failover occurs ([automatic](#automatic-failover) or [manual](#manual-failover)), all active connections to your primary instance are terminated. Clients need to reconnect to the _new_ primary instance, which becomes reachable at the exact same database URL. 
To enable reconnection, make sure your clients include retry functionality in their connection logic. ## Limitations of HA - HA increases your database's response latency by approximately 1 millisecond. - This is because Render operates a proxy in front of the database to identify connectivity issues and trigger failovers. - Render runs your primary and standby instances on geographically separated nodes in the same region. In the unlikely event that _both_ nodes are affected by an incident, your database will experience downtime. - When an [automatic failover](#automatic-failover) occurs, a small number of the most recent writes to your degraded primary instance might not be represented in your standby instance. These changes are lost. - This is because data is replicated asynchronously, and the primary might not have pushed the most recent writes to the standby before the degradation occurred. - In almost all cases, no more than a few seconds of changes are lost. - A [manual failover](#manual-failover) _almost never_ results in lost changes, but it's possible that changes to your primary instance in the last few seconds before the failover will be lost. - Failover isn't possible if your standby instance isn't available. This might occur for one of the following reasons: - The standby is affected by the same severe incident as the primary. - The standby is affected by an unrelated, simultaneous incident. - Render is performing routine maintenance on the standby. - An incident occurs shortly after a _previous_ failover occurred, and the degraded instance has not yet become healthy. - An incident occurs shortly after you initialize your primary database (before the standby is _also_ initialized). - You can't connect to an HA database's standby instance or use it for query scaling purposes. For this use case, instead create a [read replica](postgresql-read-replicas).
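The reconnection advice above boils down to retrying with backoff until the new primary accepts connections. A generic sketch that works with any driver; `connect` stands in for whatever your client library provides (e.g. a psycopg connection factory), so treat all names here as illustrative:

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=0.5):
    """Call `connect()` until it succeeds, backing off exponentially between tries.
    During a failover, early attempts may fail while the new primary comes up."""
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * 2 ** attempt)

# Example with a stand-in "driver" that fails twice before succeeding:
state = {"calls": 0}
def flaky_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("database is failing over")
    return "connection"

print(connect_with_retry(flaky_connect, base_delay=0.01))  # → connection
```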
# Admin Apps for Render Postgres Render provides simplified deployment and configuration for popular PostgreSQL admin apps right from the [Render Dashboard][dboard]: | App | Description | | ---------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | | [*pgAdmin*](https://www.pgadmin.org/) | General-purpose PostgreSQL administration. Manage schemas, tables, and indexes. Run one-off queries and view their query plans. | | [*PgHero*](https://github.com/ankane/pghero) | A performance dashboard for your database. Monitor resource usage, analyze active connections, and inspect recent queries to identify bottlenecks. | > *Interested in simplified deployment for another admin app?* Please [submit a feature request](https://feedback.render.com). Render deploys each app as a standard [web service](web-services) and automatically connects it to your database over your [private network](private-network). ## Setup 1. In the [Render Dashboard][dboard], select your database from the service list and open its *Apps* page: [img] 2. Click the *Deploy app* button for the app you want to create. A dialog appears with authentication and billing details: [img] 3. *Securely store the autogenerated credentials* for the app (such as in a password manager). You'll use these credentials to log in to the app's web interface. - You can optionally customize these credentials. - The credentials will also be available as environment variables in the created Render service. 4. Review the billing details. If everything looks good, click the *Deploy* button. - Render uses the smallest compute specs possible according to the app's requirements. You're all set! Render deploys your admin app as a [web service](web-services) in the same region as your database. 
When the deploy completes, the app's listing displays a *Deployed* label, along with an *Open app* button: [img] Click *Open app* to open the app's web interface in a new browser tab and log in with your credentials. The app is hosted on an `onrender.com` subdomain like any other web service. > *Missing your app's credentials?* See [App credentials](#app-credentials). ## Managing an existing app After you create a PostgreSQL admin app, it appears alongside your other services in the [Render Dashboard][dboard]: [img] You can manage or delete this service like any other Render service. *Note that modifying the service's environment variables can break the app's connection to your database.* ### App credentials > Your app credentials are separate from your _database_ credentials. Database credentials are available from your database's *Info* page in the [Render Dashboard][dboard]. If you didn't save your app credentials during [setup](#setup), they're available as [environment variables](configure-environment-variables) set for the created Render service: - *PgHero:* `PGHERO_USERNAME` and `PGHERO_PASSWORD` - *pgAdmin:* `PGADMIN_DEFAULT_EMAIL` and `PGADMIN_DEFAULT_PASSWORD` - Note that if you change your password via the pgAdmin UI, the `PGADMIN_DEFAULT_PASSWORD` environment variable does _not_ update. - You cannot trigger a password reset email from the pgAdmin UI. If you lose your password, delete and redeploy the pgAdmin service. # Supported Extensions for Render Postgres Render Postgres databases support most popular extensions (`pgvector`, `postgis`, and so on). Your database's PostgreSQL version determines exactly which extensions are supported, along with how you add them: ## PostgreSQL 13 and later To enable any supported extension, run the [`CREATE EXTENSION`](https://www.postgresql.org/docs/current/sql-createextension.html) command like so: ```sql CREATE EXTENSION postgis; ``` To run this command, you can start a psql session in your terminal. 
Use the **PSQL Command** provided on your database's Info page in the [Render Dashboard][dboard]. Except where noted, these extensions are available for all databases running PostgreSQL 13 or later: - [adminpack](https://www.postgresql.org/docs/current/adminpack.html) - [amcheck](https://www.postgresql.org/docs/current/amcheck.html) - [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html) - [bloom](https://www.postgresql.org/docs/current/bloom.html) - [btree_gin](https://www.postgresql.org/docs/current/btree-gin.html) - [btree_gist](https://www.postgresql.org/docs/current/btree-gist.html) - [citext](https://www.postgresql.org/docs/current/citext.html) - [cube](https://www.postgresql.org/docs/current/cube.html) - [dblink](https://www.postgresql.org/docs/current/dblink.html) - [dict_int](https://www.postgresql.org/docs/current/dict-int.html) - [dict_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) - [earthdistance](https://www.postgresql.org/docs/current/earthdistance.html) - [file_fdw](https://www.postgresql.org/docs/current/file-fdw.html) - [fuzzystrmatch](https://www.postgresql.org/docs/current/fuzzystrmatch.html) - [hstore](https://www.postgresql.org/docs/current/hstore.html) - [insert_username](https://www.postgresql.org/docs/current/contrib-spi.html) - [intagg](https://www.postgresql.org/docs/current/intagg.html) - [intarray](https://www.postgresql.org/docs/current/intarray.html) - [isn](https://www.postgresql.org/docs/current/isn.html) - [lo](https://www.postgresql.org/docs/current/lo.html) - [ltree](https://www.postgresql.org/docs/current/ltree.html) - [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html) - [old_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html)\* - \*Requires PostgreSQL 14 or later. 
- [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) - [pg_buffercache](https://www.postgresql.org/docs/current/pgbuffercache.html) - [pg_duckdb](https://github.com/duckdb/pg_duckdb)\* - \*Requires PostgreSQL 16 or later. Database must have been created after 30 January 2025. - [pg_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) - [pg_ivm](https://github.com/sraoss/pg_ivm) - [pg_prewarm](https://www.postgresql.org/docs/current/pgprewarm.html) - [pg_similarity](https://github.com/eulerto/pg_similarity)\* - \*This extension is currently not available for PostgreSQL 16 or later. - [pg_stat_statements](https://www.postgresql.org/docs/current/pgstatstatements.html) - [pg_surgery](https://www.postgresql.org/docs/current/pgsurgery.html)\* - \*Requires PostgreSQL 14 or later. - [pg_trgm](https://www.postgresql.org/docs/current/pgtrgm.html) - [pg_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) - [pgaudit](https://www.pgaudit.org/) - [pgcrypto](https://www.postgresql.org/docs/current/pgcrypto.html) - [pgrowlocks](https://www.postgresql.org/docs/current/pgrowlocks.html) - [pgstattuple](https://www.postgresql.org/docs/current/pgstattuple.html) - [pgvector](https://github.com/pgvector/pgvector)\* - \*Enable this extension with `CREATE EXTENSION vector;` - [plpgsql](https://www.postgresql.org/docs/current/plpgsql.html)\* - \*This extension is enabled by default. 
- [postgis](https://postgis.net) - [postgis_raster](https://trac.osgeo.org/postgis/wiki/WKTRaster) - [postgis_tiger_geocoder](https://postgis.net/docs/Extras.html#Tiger_Geocoder) - [postgis_topology](https://postgis.net/docs/Topology.html) - [refint](https://www.postgresql.org/docs/current/contrib-spi.html) - [seg](https://www.postgresql.org/docs/current/seg.html) - [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) - [tablefunc](https://www.postgresql.org/docs/current/tablefunc.html) - [tcn](https://www.postgresql.org/docs/current/tcn.html) - [timescaledb](https://www.timescale.com/)\* - \*Database must have been created after 12 January 2023. [Community features](https://docs.timescale.com/about/latest/timescaledb-editions/#feature-comparison) are not available. - [tsm_system_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) - [tsm_system_time](https://www.postgresql.org/docs/current/tsm-system-time.html) - [unaccent](https://www.postgresql.org/docs/current/unaccent.html) - [uuid-ossp](https://www.postgresql.org/docs/current/uuid-ossp.html) - [xml2](https://www.postgresql.org/docs/current/xml2.html) ## PostgreSQL 11 and 12 On Render Postgres databases running PostgreSQL 11 or 12, **supported extensions are enabled by default and cannot be customized.** These extensions are enabled for all PostgreSQL 11 and 12 databases: > Some of these extensions (like `postgis`) create additional schemas (like `topology`) and tables (like `spatial_ref_sys`). 
- [bloom](https://www.postgresql.org/docs/12/bloom.html) - [btree_gin](https://www.postgresql.org/docs/12/btree-gin.html) - [btree_gist](https://www.postgresql.org/docs/12/btree-gist.html) - [citext](https://www.postgresql.org/docs/12/citext.html) - [cube](https://www.postgresql.org/docs/12/cube.html) - [dblink](https://www.postgresql.org/docs/12/dblink.html) - [dict_int](https://www.postgresql.org/docs/12/dict-int.html) - [dict_xsyn](https://www.postgresql.org/docs/12/dict-xsyn.html) - [earthdistance](https://www.postgresql.org/docs/12/earthdistance.html) - [fuzzystrmatch](https://www.postgresql.org/docs/12/fuzzystrmatch.html) - [hstore](https://www.postgresql.org/docs/12/hstore.html) - [intagg](https://www.postgresql.org/docs/12/intagg.html) - [intarray](https://www.postgresql.org/docs/12/intarray.html) - [isn](https://www.postgresql.org/docs/12/isn.html) - [lo](https://www.postgresql.org/docs/12/lo.html) - [ltree](https://www.postgresql.org/docs/12/ltree.html) - [pg_buffercache](https://www.postgresql.org/docs/12/pgbuffercache.html) - [pg_prewarm](https://www.postgresql.org/docs/12/pgprewarm.html) - [pg_stat_statements](https://www.postgresql.org/docs/12/pgstatstatements.html) - [pg_trgm](https://www.postgresql.org/docs/12/pgtrgm.html) - [pgcrypto](https://www.postgresql.org/docs/12/pgcrypto.html) - [pgrowlocks](https://www.postgresql.org/docs/12/pgrowlocks.html) - [pgstattuple](https://www.postgresql.org/docs/12/pgstattuple.html) - [pgvector](https://github.com/pgvector/pgvector) - \*Database must have been created or received maintenance after 11 April 2024. Contact support for assistance. - [postgis](https://postgis.net) - Not available on the Starter instance type for PostgreSQL 12, due to resource requirements. 
- [postgis_tiger_geocoder](https://postgis.net/docs/Extras.html#Tiger_Geocoder) - [postgis_topology](https://postgis.net/docs/Topology.html) - [tablefunc](https://www.postgresql.org/docs/12/tablefunc.html) - [unaccent](https://www.postgresql.org/docs/12/unaccent.html) - [uuid-ossp](https://www.postgresql.org/docs/12/uuid-ossp.html) ### Removing extensions If you don't need some of these extensions and want to remove them from your PostgreSQL 11 or 12 database, [email support](mailto:support@render.com) and we'll be happy to delete them for you. # Render Postgres Connection Pooling Render Postgres databases support a limited number of simultaneous direct connections. If your database is approaching this limit, you can set up *connection pooling* on Render using [PgBouncer](https://www.pgbouncer.org/). Using this setup, your other services connect to your PgBouncer instance instead of connecting directly to your database. PgBouncer reuses its pool of active database connections to serve queries from any number of different services. ## Setup You can deploy PgBouncer on Render either by [declaring its configuration](infrastructure-as-code) in a `render.yaml` blueprint file, or by manually configuring a private service from your dashboard. Both options are covered below. ### Deploying with a `render.yaml` blueprint 1. Create a file named `render.yaml` in the root of a Git repository. This file describes your PgBouncer instance, along with the database it serves: ```yaml databases: - name: mysite databaseName: mysite user: mysite services: - type: pserv name: pgbouncer runtime: docker plan: standard repo: https://github.com/render-oss/docker-pgbouncer envVars: - key: DATABASE_URL fromDatabase: name: mysite property: connectionString - key: POOL_MODE value: transaction - key: SERVER_RESET_QUERY value: DISCARD ALL - key: MAX_CLIENT_CONN value: 500 - key: DEFAULT_POOL_SIZE value: 50 ``` 2. Commit your changes and push them to GitHub/GitLab/Bitbucket. 3. 
In the Render Dashboard, go to the [Blueprints](https://dashboard.render.com/blueprints) page and click **New Blueprint Instance**. Select the repository with the blueprint file (give Render permission to access it if you haven't already) and click **Approve** on the next screen. That's it! Render creates your database and PgBouncer instance. You can navigate to your new `pgbouncer` service in the dashboard to find the URL that your applications should connect to. You can connect using the internal connection string from your database, replacing the database host and port with the internal hostname and port of your PgBouncer instance: `postgresql://USER:PASSWORD@PGBOUNCER_HOST:PORT/DATABASE`. ### Creating services from the dashboard 1. Create a new [Render Postgres database](postgresql). Note your database's **internal database URL** (you'll need it in a later step). 2. Create a new **Private Service** and point it to Render's PgBouncer Docker image repo: `https://github.com/render-oss/docker-pgbouncer` 3. Set the private service's **Language** field to `Docker`. 4. Add the following environment variables to the private service: | Key | Value | | -------------------- | ---------------------------------------------------------------- | | `DATABASE_URL` | The **internal database URL** for the database you created above | | `POOL_MODE` | `transaction` | | `SERVER_RESET_QUERY` | `DISCARD ALL` | | `MAX_CLIENT_CONN` | `500` | | `DEFAULT_POOL_SIZE` | `50` | That's it! Save your private service to deploy your PgBouncer instance on Render. # Upgrading Your Render Postgres Version > *Upgrading your database requires downtime.* Schedule upgrades accordingly. You can upgrade your Render Postgres database to a more recent major version of PostgreSQL in one of the following ways: - Perform an [in-place upgrade](#upgrading-in-place) - This method always upgrades to the latest major version supported by Render (currently ).
- Create a _new_ database and [migrate your data](#migrating-to-a-new-instance) - This method enables you to upgrade to _any_ major version supported by Render. ## Version support | Full support | Legacy support\* | Not yet supported | | ------------------------------------------------- | ---------------- | ------------------------------- | | | 12, 11 | | \*Only workspaces with an _existing_ database on PostgreSQL 11 or 12 can create new databases on the corresponding version. View your database's current version from its *Info* page in the [Render Dashboard][dboard]: [img] ## Upgrading in-place Render supports in-place database upgrades to PostgreSQL from any previous version. ### 1. Perform a test upgrade (recommended) Before you upgrade, we strongly recommend creating a temporary copy of your database and performing a test upgrade on the copy. This way, you can: - Confirm that your database will upgrade successfully - Estimate how long your database upgrade will take 1. In the [Render Dashboard][dboard], go to your database's *Info* page and click its current version: [img] The following page appears with recommended upgrade steps: [img] *Don't see the "Clone this database" button?* This means that your database uses a legacy instance type that doesn't yet have [point-in-time recovery](postgresql-backups) (PITR) enabled. Render uses PITR to create your database copy. To enable PITR immediately, you can move your database to a new flexible PostgreSQL plan by [changing its instance type](postgresql-creating-connecting#changing-your-instance-type). Otherwise, PITR will be enabled during your database's next planned maintenance. We strongly recommend that you enable PITR for any database before you upgrade it in-place, so that you can copy it for testing. 2. Click *Clone this database*. Render immediately creates a new database and starts replicating the primary database's state from _ten minutes earlier_. 
- The clone will not reflect any changes made to the primary database after the restore point. 3. Click *View PostgreSQL clone* to jump to the clone's upgrade page: [img] You can't upgrade the clone until data replication completes. 4. When the *Upgrade to PostgreSQL * button becomes active, click it to kick off the clone's in-place upgrade. A log explorer appears and displays log entries from the upgrade process. Depending on the size of your database, the upgrade process might take up to one hour. > Note the total duration of the test upgrade, which provides a helpful estimate for the primary database upgrade. 5. When the upgrade completes, the clone's status changes from *Upgrading* to *Available*. You can now run commands on the upgraded clone to confirm that it behaves as expected. > *If the upgrade fails, the clone remains on its original PostgreSQL version.* > > Review the logs to identify any issues. If the underlying cause or resolution is unclear, reach out to support in the Render Dashboard. 6. When you're done testing the clone, you can delete it from its *Info* page. ### 2. Upgrade your database After you perform a successful [test upgrade](#1-perform-a-test-upgrade-recommended), you can confidently upgrade your primary database. > *Your database will be unavailable during the upgrade.* > > The upgrade might take up to an hour. If you ran a test upgrade, the duration of the primary database upgrade should be similar. 1. In the [Render Dashboard][dboard], go to your database's *Info* page and click your current version to open the upgrade page: [img] [img] 2. Click *Upgrade to PostgreSQL * to start the upgrade process. A log explorer appears and displays log entries from the upgrade process. 3. When the upgrade completes, your database's status changes from *Upgrading* to *Available*. > *If the upgrade fails, your database remains on its original PostgreSQL version.* > > Review the logs to identify any issues. 
If the underlying cause or resolution is unclear, reach out to support in the Render Dashboard. After the upgrade, your other apps and services resume connecting to your database using the same credentials and connection strings as before. ## Migrating to a new instance These are the high-level steps for moving your data to a new Render Postgres database with a higher major version: 1. [Create a new database](https://dashboard.render.com/new/database) with the desired version. 2. Disable or suspend any applications that write to your existing database. - This guarantees that you can take an up-to-date backup of your existing database. 3. [Take a backup](#taking-a-backup) of your existing database. 4. Restore the backup to your _new_ database. 5. Point all of your applications at the new database. Re-enable the applications that perform writes. *However*, before you complete the above, we recommend attempting a "dry run" by performing just _these_ steps: 1. [Create a new database](https://dashboard.render.com/new/database) with the desired version. 2. [Take a backup](#taking-a-backup) of your existing database. 3. Restore the backup to your _new_ database. The dry run enables you to confirm whether a full migration will succeed, and it doesn't require suspending or modifying any of your applications. ### Taking a backup If your existing database uses a *Standard* instance type or higher, you can trigger a backup directly from your database's *Recovery* page in the [Render Dashboard][dboard]: [img] Otherwise, you can take a backup using [`pg_dump`](https://www.postgresql.org/docs/current/backup-dump.html). 
This command dumps your database to a local file (swap in your database's credentials and name, and replace `oregon-postgres.render.com` with your database's region-specific hostname if it isn't in Oregon):

```bash
PGPASSWORD={PASSWORD} pg_dump -h oregon-postgres.render.com -U {DATABASE_USER} \
  -n public --no-owner {DATABASE_NAME} > database_dump.sql
```

You can then restore this data to your new database (again substituting the _new_ database's values):

```bash
PGPASSWORD={PASSWORD} psql -h oregon-postgres.render.com -U {DATABASE_USER} {DATABASE_NAME} < database_dump.sql
```

If you have _multiple_ databases in your Render Postgres instance, repeat the steps above for each database you want to migrate. Alternatively, you can use [`pg_dumpall`](https://www.postgresql.org/docs/current/app-pg-dumpall.html) to automatically back up all databases in your instance. For more details on this process, see [Render Postgres Backups and Recovery](postgresql-backups). ### Troubleshooting If certain statements fail to execute due to a version incompatibility, you might need to manually modify your database dump to resolve these issues. Review the changelogs for each PostgreSQL version ahead of time to identify any such incompatibilities and their resolutions. ## Minor version updates Render periodically upgrades your database's minor PostgreSQL version to apply the latest security fixes. Whenever one of these updates requires downtime, we notify you ahead of time via email. In the [Render Dashboard][dboard], you can schedule your preferred maintenance window or trigger the maintenance manually. # Flexible Plans for Render Postgres > *Flexible Render Postgres plans are now enabled for all workspaces.* Render has rolled out flexible plans for Render Postgres.
With these plans, you can: - Increase your database's storage at any time, without downtime - Adjust your database's CPU and RAM independently of storage - Choose from a much wider range of compute options, up to 128 CPUs and 1 TB RAM Additionally, we've expanded the availability of [certain PostgreSQL features](#expanded-feature-availability). For example, point-in-time recovery is being added to all paid databases. > *[*Legacy instances*](postgresql-legacy-instance-types) keep their existing plan and pricing.* > > You can optionally move a legacy instance to a flexible plan by [changing its instance type](postgresql-creating-connecting#changing-your-instance-type) to any [new paid instance type](#new-instance-types). Note that your database will be unavailable for a few minutes during the switch, and you can't move _back_ to a legacy instance type. ## What's new ### Independent storage and compute *Prior to flexible plans,* a database's instance type always determined both its storage _and_ compute specs: [img] *With this refresh,* the new Render Postgres [instance types](#new-instance-types) _only_ determine compute specs—you can set storage independently. Each database is billed according to its particular combination of instance type and storage, so you can pay for exactly the resources you need. - Instance types are billed according to their compute specs, prorated to the second. [See pricing.](#pricing-for-new-instance-types) - Storage is billed at a fixed rate of $0.30 per GB per month, prorated to the second. - You can increase your database's storage at any time, to any multiple of 5 GB. - Adding storage does not require any downtime for your database. - You can't _reduce_ storage for a database. ### New instance types Render Postgres now offers four tiers of instance types: | Tier | Description | |--------|--------| | *Free* | The Free instance type is unchanged. Free databases have a fixed storage of 1 GB, and they expire after 30 days.
[Learn more about free Render Postgres databases](free#free-postgres). | | *Basic* | Instance types with compute and pricing comparable to Render's legacy *Starter*, *Standard*, and *Pro* instance types. [See pricing.](pricing#postgresql) | | *Pro* | Instance types with a 1:4 CPU-to-RAM ratio, suitable for production workloads. - *Smallest:* 1 CPU / 4 GB RAM - *Largest:* 128 CPU / 512 GB RAM [See pricing.](pricing#postgresql) | | *Accelerated* | Instance types with a 1:8 CPU-to-RAM ratio, suitable for memory-intensive workloads. - *Smallest:* 1 CPU / 8 GB RAM - *Largest:* 128 CPU / 1 TB RAM [See pricing.](pricing#postgresql) | Each instance type has a name that reflects its tier and RAM, such as *Basic-1gb* or *Accelerated-64gb*. ### Expanded feature availability The following Render Postgres features (some of which were previously limited to *Professional* workspaces or higher) are now available to any database with eligible specs: | Feature | Newly Eligible Databases | |--------|--------| | [*Point-in-time recovery*](postgresql-backups) | All paid databases receive point-in-time recovery (PITR) automatically. Your retention period for PITR depends on your workspace's plan: - *Hobby:* 3 days - *Professional or higher:* 7 days Databases on a [legacy instance type](postgresql-legacy-instance-types) will receive point-in-time recovery as part of their first maintenance period following the release of flexible plans. | | [*Read replicas*](postgresql-read-replicas) | Any database on a flexible plan with at least 0.5 CPU and 10 GB of storage | | [*High availability*](postgresql-high-availability) | - Any database on a flexible plan using a *Pro* or *Accelerated* [instance type](#new-instance-types) - Any database on a [legacy](postgresql-legacy-instance-types) *Pro* or *Pro Plus* instance type | ## Pricing for new instance types [*See the pricing page.*](pricing#postgresql) ## FAQ ### Will Render automatically migrate legacy instances to a flexible plan?
*No.* By default, databases on a [legacy instance type](postgresql-legacy-instance-types) keep their current specs and pricing. Now that flexible plans are enabled for all workspaces, you can move an existing database to a flexible plan at any time by changing its instance type in the Render Dashboard. > *Note the following:* > > - If you move to a new instance type, your database will be unavailable for a few minutes while the new instance spins up. > - You can't move a database back to a legacy instance type. ### Can I change my database's instance type? *Yes.* You can [change your database's instance type](postgresql-creating-connecting#changing-your-instance-type) at any time in the Render Dashboard. You can change to a smaller _or_ larger instance type, without changing your storage. > *Note the following:* > > - Your database will be unavailable for a few minutes while the new instance spins up. > - You can't move a paid database to the Free instance type. > - If you've enabled a Render Postgres feature with [minimum spec requirements](#expanded-feature-availability) (such as high availability), you can only move to another instance type that meets those requirements. ### Can I reduce my database's storage? *No.* You can increase an existing database's storage at any time, but you can't reduce it. To reduce your storage, you can create a _new_ database with the desired storage and migrate your data by restoring from a [backup](postgresql-backups#logical-backups). ### How are flexible database plans billed? Each database on a flexible plan is billed according to its combination of instance type and storage: - Instance types are billed according to their compute specs, prorated to the second. [See pricing.](pricing#postgresql) - Storage is billed at $0.30 per GB per month, prorated to the second. # Render Postgres Legacy Instance Types In October 2024, Render introduced [flexible plans](postgresql-refresh) for Render Postgres.
These plans enable you to set your database's storage and compute separately. Storage for a database on a flexible plan is billed at a fixed rate per GB, separate from compute. Databases created _before_ this change use *legacy instance types* that determine both storage _and_ compute, billed together. If you have a database on a legacy instance type, you can optionally move it to a flexible plan by [changing its instance type](postgresql-creating-connecting#changing-your-instance-type) in the Render Dashboard. You cannot move _back_ to a legacy instance type. ## Specs > *These specs are provided as reference for existing databases on a legacy instance type.* > > Legacy instance types are not available for new databases. | Legacy Instance Type | Compute | Storage | Max Connections | Price | |--------|--------|--------|--------|--------| | *Starter* | 256 MB RAM, 0.1 CPU | 1 GB | 97 | $7/month | | *Standard* | 1 GB RAM, 1 CPU | 16 GB | 97 | $20/month | | *Pro* | 4 GB RAM, 2 CPU | 96 GB | 97 | $95/month | | *Pro Plus* | 8 GB RAM, 4 CPU | 256 GB | 197 | $185/month | # Regions You can deploy Render services to any of the following regions to minimize latency for your users: - Oregon, USA - Ohio, USA - Virginia, USA - Frankfurt, Germany - Singapore We'll continue to add regions over time. If you're interested in a particular region, vote for it at [feedback.render.com](https://feedback.render.com). ## Choosing a region > You don't choose a region for [static sites](static-sites), which are backed by a global CDN. You choose a region for your service or datastore during the creation flow in the [Render Dashboard][dboard]: [img] The dashboard indicates the regions where you already have services (if any). To deploy elsewhere, click *Deploy in a new region*. ## Changing a service's region Render doesn't currently support changing the region for an existing service or database.
Instead, create a new service or database in the desired region, then migrate your configuration and data as needed. ## Private networking Each region provides a separate [private network](private-network) for your services. This means that services in _different_ regions can't communicate directly over a private network. To communicate between services across regions, you must secure that communication for traversal over the public internet. # Private Network Your Render services in the same region can communicate over their shared private network, _without_ traversing the public internet: [img] - Each [web service](web-services) and [private service](private-services) has a unique hostname on the private network. - These services can listen for private network traffic on _almost_ any port ([see below](#port-restrictions)) and use any protocol. - [Free web services](free#free-web-services) can _send_ private network requests, but they can't _receive_ them. - Each Render [Postgres](postgresql) and [Key Value](key-value) instance has an internal URL specifically for private network connections. - [Background workers](background-workers) and [cron jobs](cronjobs) can _send_ private network requests, but they can't _receive_ them. Private network communication is fast, safe, and reliable. It uses stable internal hostnames and IPs that dynamically map to individual instance addresses (which can change between deploys). [Direct IP-based communication](#direct-ip-communication-advanced) is also supported for advanced use cases. > *Need a private connection to a non-Render system?* > > See [Integrating with AWS PrivateLink](#integrating-with-aws-privatelink). ## Port restrictions Each service is limited to a maximum of 75 open ports. The following ports _cannot_ be used for private network communication: - `10000` - `18012` - `18013` - `19099` ## What's on my private network? [Static sites](static-sites) are _not_ on a private network.
Other Render services are on the same private network if they're deployed in the same [region](regions) _and_ they belong to the same workspace. > With a [*Professional* workspace](professional-features) or higher, you can [block private network traffic](projects#blocking-cross-environment-traffic) from entering or leaving a particular environment. ## How to connect These service types each have an internal address or URL: - Web services - Private services - Render Postgres databases - Render Key Value instances This value is available from each service's *Connect* menu in the [Render Dashboard][dboard] (see the *Internal* tab): [img] Private services also display this value as their *Service Address*: [img] The private service above has the internal address `elasticsearch-2j3e:9200`. Other services _on the private network_ can communicate with it at this address. > *You might need to specify a service's expected protocol in its internal address string when you connect.* > > For example, you might need to specify `http://elastic-qeqj:9200` instead of just `elastic-qeqj:9200`. [Background workers](background-workers) and [cron jobs](cronjobs) _don't_ have an internal address, so they can't receive inbound private network traffic. However, they can _send_ requests to other service types on their private network. ## Integrating with AWS PrivateLink With a *Professional* workspace or higher, you can create secure, low-latency connections from your private network to compatible non-Render systems hosted on AWS: [img] Use a private link to connect to Snowflake, MongoDB Atlas, or resources in your own AWS VPC. For details, see [Private Link Connections](private-link). ## Direct IP communication (advanced) > *Use this method _only_ if [*hostname-based communication*](#how-to-connect) does not serve your use case.* For advanced use cases, you can send private network requests directly to the IP of a specific service instance.
This is most commonly useful for [scaled](scaling) services in the following cases: - You need to message each running instance of your service individually (such as to pull metrics with a monitoring tool like Prometheus). - You want to implement custom load balancing logic for your service, instead of relying on Render's built-in load balancing. Each web service and private service has an associated *discovery hostname* that resolves to _all_ of its active instance IPs. By convention, this hostname has the format `[INTERNAL_HOSTNAME]-discovery` (e.g., `myapp-ne5j-discovery`). To find your service's internal hostname, see [How to connect](#how-to-connect). Each service exposes its discovery hostname to its own environment via the `RENDER_DISCOVERY_SERVICE` environment variable. If you manage your services via [Blueprints](infrastructure-as-code), you can also access _another_ service's discovery hostname (see [Referencing values from other services](blueprint-spec#referencing-values-from-other-services)). ### Example: Obtaining instance IPs The snippet below shows a JavaScript function that fetches all of a service's instance IP addresses via DNS lookup and prints them to the console. For other languages, use a supported DNS lookup library. > *Use a lookup API that relies on the underlying system's DNS resolver.* > > This ensures that your lookup applies necessary DNS configuration (such as rules defined in `/etc/resolv.conf`). 
```js
const dns = require('dns')

// Obtain the discovery hostname from the environment variable
const discoveryHostname = process.env.RENDER_DISCOVERY_SERVICE

function fetchAndPrintIPs() {
  // Perform a DNS lookup:
  // all: true returns all IP addresses for the given hostname
  // family: 4 returns IPv4 addresses
  dns.lookup(discoveryHostname, { all: true, family: 4 }, (err, addresses) => {
    if (err) {
      console.error('Error resolving DNS:', err)
      return
    }

    // Map over the results to extract just the IP addresses
    const ips = addresses.map((a) => a.address)
    console.log(`IP addresses for ${discoveryHostname}: ${ips.join(', ')}`)
  })
}

// Run the lookup (RENDER_DISCOVERY_SERVICE is set automatically on Render)
if (discoveryHostname) {
  fetchAndPrintIPs()
} else {
  console.error('RENDER_DISCOVERY_SERVICE is not set')
}
```

# Private Link Connections > *Private links require a Professional workspace or higher.* [See pricing.](pricing) You can create *private links* in your workspace to securely connect your infrastructure to non-Render providers hosted on AWS: [img] Use a private link to connect to: - AWS-hosted providers like Snowflake or MongoDB Atlas - Resources in your own AWS VPC, such as an EC2 instance or an Aurora database You create _same-region_ private links (e.g., Virginia-to-Virginia) directly in the Render Dashboard. ## Setup Creating a private link requires setup both in the Render Dashboard _and_ with the provider you're linking to: ### 1. Render 1. Open the [Render Dashboard][dboard]. 2. From your workspace home, select *Private Links* in the left pane. [img] 3. Click *Create Private Link*. The creation form appears: [img] 4. Copy the value of the *ARN Principal* field. - Some systems use this value to authorize the incoming private link connection. ### 2. External provider 1. Open your provider's dashboard. 2. Create a *VPC endpoint service* in the same region as the resource you're linking to. *This process varies by provider.* See guidance for popular providers in the tabs below: **MongoDB Atlas** #### MongoDB Atlas > *Your MongoDB Atlas cluster must be hosted on AWS.* 1.
In the [Atlas UI](https://cloud.mongodb.com), select the project containing your cluster and open its *Network Access* page. 2. Select the *Private Endpoint* tab: [img] 3. Click *Add Private Endpoint*. The endpoint creation dialog appears. 4. Under *Cloud Provider*: - Select *AWS*. - Select the same region where your cluster is hosted. 5. Under *Interface Endpoint*, wait for your endpoint service to become ready: [img] 6. Close the endpoint creation dialog (you'll finish configuring the endpoint later). Your new endpoint appears in the *Private Endpoint* tab: [img] 7. Copy your new endpoint's *Atlas Endpoint Service* value. You'll provide this value to Render in the next step. **Snowflake** #### Snowflake As described in the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/admin-security-privatelink#enabling-aws-privatelink), authorizing a managed cloud service like Render first requires contacting Snowflake support. In your message to Snowflake support: - Request a *VPC endpoint service* in the same AWS region as your Snowflake database. - Provide the *ARN Principal* value you copied in the Render Dashboard. - Request the name of the created VPC endpoint service. The service name resembles the following: ``` com.amazonaws.vpce.us-east-1.vpce-svc-abc123... ``` You'll provide the endpoint service name to Render in the next step. Also complete any additional actions indicated by Snowflake support. **Self-managed VPC (EC2, Aurora, etc.)** #### Self-managed VPC (EC2, Aurora, etc.) 1. Follow the steps in the AWS documentation to [create an endpoint service](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html#create-endpoint-service-nlb) in your VPC. - To simplify connecting later, disable the **Require acceptance for endpoint** option. - Your private link will only be able to access resources that are registered to the network load balancer (NLB) you apply to your endpoint service. 2. 
Follow the steps in the AWS documentation to [allow a principal](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html) for your endpoint service. - Provide the **ARN Principal** value you copied in the Render Dashboard. - After you add an allowed principal this way, your endpoint service rejects connections from all other principals. 3. Copy the name of your new endpoint service. This value resembles the following: ``` com.amazonaws.vpce.us-east-1.vpce-svc-abc123... ``` You'll provide this value to Render in the next step. ### 3. Render 1. Return to the private link creation form in the [Render Dashboard][dboard]: [img] 2. Provide a **Name** and **Description** for your private link. - These values are for your team's reference only. 3. Provide the **VPC Endpoint Service Name** you obtained from your provider. This value resembles the following: ``` com.amazonaws.vpce.us-east-1.vpce-svc-abc123... ``` The *Region* field automatically populates based on the provided value. 4. Under *Access Policy*, choose either *Allow All* or *Limit to Selected Environments*: | Access Policy | Description | |--------|--------| | *Allow All* | All of your services hosted in the same region as the private link can access it. | | *Limit to Selected Environments* | You specify which of your [project environments](projects) can access the private link. A service can access the private link if _both_ of the following are true: - The service belongs to one of the selected environments. - The service is hosted in the same region as your private link. | 5. Click *Create Private Link*. Your browser redirects to your private link's details page: [img] For now, your private link has the status *Pending Acceptance*. 6. Copy your private link's *AWS ID*. You might need to provide this value to your provider in the next step. ### 4. External provider 1. Return to your provider's dashboard. 2.
Finalize your connection according to your provider: **MongoDB Atlas** #### MongoDB Atlas 1. In the [Atlas UI](https://cloud.mongodb.com), return to the *Private Endpoint* tab for your project. 2. Click the *Edit* button for your endpoint. The in-progress endpoint creation dialog appears. 3. Advance to the *Finalize Endpoint Connection* tab: [img] 4. In the *Your VPC Endpoint ID* field, provide the *AWS ID* value you copied in the Render Dashboard. 5. Click *Create*. MongoDB Atlas begins deploying your finalized endpoint. When the deploy completes, your endpoint's status updates to available in both the Atlas UI and the Render Dashboard: [img] You're ready to start [connecting](#connecting-from-your-render-services) from your Render infrastructure. **Snowflake** #### Snowflake If required, contact Snowflake support to finalize your connection (such as by authorizing Render's incoming private link connection). When the connection is finalized, your private link's status updates to *Available* in the Render Dashboard: [img] You're ready to start [connecting](#connecting-from-your-render-services) from your Render infrastructure. **Self-managed VPC (EC2, Aurora, etc.)** #### Self-managed VPC (EC2, Aurora, etc.) If your endpoint service requires accepting incoming connections, follow the steps in the AWS documentation to [accept the incoming connection](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#accept-reject-connection-requests) from Render. When the connection is finalized, your private link's status updates to *Available* in the Render Dashboard: [img] You're ready to start [connecting](#connecting-from-your-render-services) from your Render infrastructure. ## Connecting from your Render services After your private link is fully established, you can start connecting to your provider from your Render infrastructure. 
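Whichever provider you use, the application-side change is usually the same: keep your existing client code and swap the public hostname in your connection string for the private one. Here's a minimal Node sketch of that swap (the hostnames and the `PRIVATE_DB_HOST` variable are hypothetical; Node's WHATWG `URL` class parses `postgresql://`-style connection strings):

```javascript
// Sketch: point an existing connection string at a private-link hostname.
// PRIVATE_DB_HOST is a hypothetical environment variable holding the private
// DNS name (or IP) registered with your provider's endpoint service.
function usePrivateHost(connectionString, privateHost) {
  const url = new URL(connectionString)
  url.hostname = privateHost // user, password, port, and database are unchanged
  return url.toString()
}

const publicUrl = 'postgresql://app:secret@db.example.com:5432/appdb'
const privateUrl = usePrivateHost(publicUrl, process.env.PRIVATE_DB_HOST || 'vpce-abc123.example.internal')
console.log(privateUrl)
```

For drivers configured with separate host and port fields instead of a URL, substitute the private DNS name into the host field directly.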
To connect to a particular resource, use its private connection URL from your provider: **MongoDB Atlas** #### MongoDB Atlas 1. In the [Atlas UI](https://cloud.mongodb.com), select your cluster and open its *Connect* dialog: [img] 2. Select the *Private Endpoint* connection type. 3. Select whichever connection method your Render service will use (usually a language-specific driver). All displayed methods will use the private connection URL accessible via your private link. 4. Apply the corresponding changes to your Render service and deploy. **Snowflake** #### Snowflake Your Render services can connect to Snowflake using your Snowflake *private connectivity URL*. For details, see the [Snowflake docs.](https://docs.snowflake.com/en/user-guide/organizations-connect#private-connectivity-urls) **Self-managed VPC (EC2, Aurora, etc.)** #### Self-managed VPC (EC2, Aurora, etc.) To connect to a particular resource (such as an EC2 instance or Aurora cluster): 1. In the AWS console, find the private DNS name or IP address of the resource, as registered with your endpoint service’s network load balancer (NLB). 2. Update your Render service’s configuration to use this private DNS name or IP address. 3. Deploy your Render service. ## Limitations - Private links require a *Professional* workspace or higher. - By default, a workspace can have up to three private links. - If you require additional private links, please [contact us](contact). - Private links support connections initiated _from_ your Render infrastructure _to_ an external provider, but not the reverse. - Your external provider must be hosted in an AWS VPC. - Your external provider must support creating a VPC endpoint service. - Certain Render customers might not be able to create private links in the Oregon region. - If you encounter this issue, please reach out to support in the [Render Dashboard](https://dashboard.render.com?contact-support). 
- You can currently only create private links in the same region as the VPC endpoint service you're linking to. - Your services in other regions cannot access the private link: [img] # Edge Caching for Web Services Render provides *edge caching* for static assets (documents, images, etc.) served by paid [web services](web-services). With edge caching enabled, you can speed up response times and reduce load on your web service: [img] Edge caching is powered by the same global CDN as Render [static sites](static-sites). ## Setup > *Edge caching is not available for [free web services](free).* 1. In the [Render Dashboard][dboard], open your web service's *Settings* page and scroll down to the *Edge Caching* section: [img] 2. Under *Cacheable file types*, click the dropdown and select an option: 3. After you select an option, a confirmation dialog appears. Review and *Confirm* the notices, then click *Save changes*. You're all set! Render begins caching your web service's [cache-eligible responses.](#cache-eligibility) ## How edge caching works Whenever a client sends an HTTP request to your cache-enabled web service, Render determines whether the request is eligible to serve from the edge cache. (For details, see [Cache eligibility](#cache-eligibility).) If the request is cache-eligible, Render checks the edge cache for the corresponding resource. - *If the requested resource is in the cache* (and the entry isn't stale), Render serves the cached version: [img] In this case, the request never reaches your web service. This speeds up the response and reduces load. - *Otherwise,* Render fetches the resource from your web service and—if it's _also_ cache-eligible—caches it for future requests: [img] ### Cache eligibility When serving a request from your web service, Render uses the following logic to determine whether the response can be cached for future requests: [diagram] \* See details below.
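To make the decision flow concrete, here's an illustrative sketch of these checks in Node.js. This is a simplified model based on the eligibility rules described in this section, not Render's actual implementation; the request/response shapes, lowercased header names, and `cacheableExtensions` set are assumptions:

```js
// Illustrative sketch only — not Render's implementation. Cache-Control
// parsing is deliberately simplified.
const DEFAULT_CACHEABLE_STATUS = new Set([200, 206, 301, 302, 303, 404, 410])

function isCacheEligible(request, response, cacheableExtensions) {
  // The originating request must use GET or HEAD
  if (!['GET', 'HEAD'].includes(request.method)) return false

  // The requested resource must have a cacheable file type per your settings
  const extension = request.path.split('.').pop().toLowerCase()
  if (!cacheableExtensions.has(extension)) return false

  // Responses that set cookies are never cached
  if (response.headers['set-cookie']) return false

  const cacheControl = response.headers['cache-control']
  if (cacheControl) {
    // A Cache-Control header decides: caching must be allowed explicitly
    if (/\b(no-cache|no-store|private)\b/.test(cacheControl)) return false
    if (/\b(?:s-maxage|max-age)=0\b/.test(cacheControl)) return false
    return /\bpublic\b/.test(cacheControl)
  }

  // With no Cache-Control header, fall back to default-cacheable status codes
  return DEFAULT_CACHEABLE_STATUS.has(response.status)
}
```

Real `Cache-Control` handling is more involved; see [Setting `Cache-Control` headers](#setting-cache-control-headers) for the supported directives.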

To summarize, *all* of the following must be true for a response to be cache-eligible: - The originating request *must* use the `GET` or `HEAD` HTTP method. - The requested resource *must* have a [cacheable file type](#cacheable-file-types) based on your settings. - The response *must either* include a `Cache-Control` header that allows caching *or* have a [default-cacheable status code](#default-cacheable-status-codes). - Additionally, the response *must not* include a `Set-Cookie` header. #### Cacheable file types When you [enable edge caching](#setup) for your web service, you select one of the following options for *Cacheable file types*: #### Default-cacheable status codes If your web service returns a cache-eligible response _without_ a [`Cache-Control` header](#setting-cache-control-headers), Render caches the response if it has one of the following status codes (and applies the corresponding default TTL): | Status code | Default TTL | | ------------- | ----------- | | 200, 206, 301 | 120 minutes | | 302, 303 | 20 minutes | | 404, 410 | 3 minutes | ### Invalidation and expiration To help ensure that clients receive up-to-date content, Render invalidates edge cache entries in the following scenarios: | *New deploys* | Each time you successfully deploy a new version of your web service, Render purges _all_ of the service's edge cache entries. This way, the cache doesn't serve stale content from the service's previous version. Render waits until all of the previous version's instances have shut down before purging the cache (learn more about [zero-downtime deploys](deploys#zero-downtime-deploys)). Failed deploys do _not_ trigger a purge. Purging the cache might briefly increase your web service's request volume, but only slightly. [See details.](#load-protection-on-cache-purge) | | *TTL expiration* | Each cache entry has a corresponding time-to-live (TTL). When an entry's TTL expires, the entry is considered stale. 
The next request for a stale entry is sent to your web service, which refreshes the entry. | | *Manual purge* | You can trigger a cache purge for your web service from its *Settings* page in the [Render Dashboard][dboard]: [img] As with a new deploy, this purges _all_ of your web service's associated edge cache entries. Purging the cache might briefly increase your web service's request volume, but only slightly. [See details.](#load-protection-on-cache-purge) | #### Load protection on cache purge Whenever you purge your web service's edge cache (either manually or by triggering a new deploy), Render's CDN automatically protects your web service from receiving a sudden influx of requests. If multiple clients request the same uncached resource, Render's CDN forwards only _one_ of those requests along to your web service. The CDN caches your web service's response, then serves it to all waiting clients. This pattern is called *request collapsing*. Your web service might experience a brief traffic increase after a cache purge, but thanks to request collapsing, the size of that increase is roughly equal to the number of _unique resources_ being requested. This is usually a small fraction of your service's total request volume. ## Setting `Cache-Control` headers You can customize Render's edge caching behavior for a particular resource by including a [`Cache-Control`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Cache-Control) (or `CDN-Cache-Control`) header in your web service's response: ```http Cache-Control: public, max-age=7200 ``` > **New to cache control headers?** > > Learn more about supported [response directives](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Cache-Control#response_directives). | Customization | Description | |--------|--------| | **Set a TTL (time-to-live)** | Do **both** of the following in your response's `Cache-Control` header: - Include the `public` directive. 
- Set the `max-age` or `s-maxage` directive to a value greater than `0`. `Cache-Control: public, max-age=3600` | | **Disable caching** | Do **either** of the following in your response's `Cache-Control` header: - Include the `no-cache`, `no-store`, or `private` directive. - Set the `max-age` or `s-maxage` directive to `0`. `Cache-Control: no-cache` | | **Set revalidation behavior** | Include a combination of the `must-revalidate`, `stale-while-revalidate`, and `stale-if-error` directives. `Cache-Control: stale-while-revalidate=60, stale-if-error=3600, public, max-age=1200` | ### Precedence rules Render applies the following precedence rules to cache control headers: - The `CDN-Cache-Control` header takes precedence over the `Cache-Control` header if both are present. - The `s-maxage` directive takes precedence over the `max-age` directive if both are present. ## Inspecting cache behavior Each response from a cache-enabled web service includes a `CF-Cache-Status` header: ```http CF-Cache-Status: HIT ``` The value of this header indicates whether the response interacted with the edge cache and in what way. The most common values are: | Value | Description | |--------|--------| | `HIT` | The response was served from the edge cache. | | `MISS` | The response was not found in the edge cache. It was served from your web service and cached if eligible. | | `DYNAMIC` | Some element of the incoming HTTP request was not [cache-eligible](#cache-eligibility), and the response was served from your web service. Most commonly indicates one of the following: - The request used an HTTP method other than `GET` or `HEAD`. - The requested resource did not have a [cacheable file type](#cacheable-file-types) based on your settings. | | `EXPIRED` | The response was found in the edge cache, but its TTL had expired. It was served from your web service and the edge cache was updated with the new response. 
| | `BYPASS` | The response was served directly from your web service and was _not_ stored in the edge cache, usually for one of the following reasons: - The response included a `Cache-Control` header that disabled caching. - The response did _not_ include a `Cache-Control` header, and it returned a status code that is not [default-cacheable](#default-cacheable-status-codes). - The response included a `Set-Cookie` header. | # WebSockets on Render The WebSocket protocol enables real-time, bi-directional data streaming between a client and server. It's commonly used for app features like text chat, financial dashboards, and AI voice assistants: Diagram showing WebSocket messages between a client and a server *Render [web services](web-services) can accept inbound WebSocket connections from the public internet.* Additionally, service types besides [static sites](static-sites) can initiate _outbound_ WebSocket connections over both the public internet and your [private network](private-network). ## Web service setup In your web service code, you usually extend your existing HTTP server framework with WebSocket support. For example, in Node.js it's common to use the `ws` module with the Express framework to enable WebSocket connections. See basic examples for some popular frameworks below, and consult your framework's documentation for additional details. 
**Express (Node.js)**

This example uses Express along with the `ws` module:

```js:app.js
const express = require('express')
const { createServer } = require('http')
const WebSocket = require('ws')

const app = express()
const server = createServer(app)
const port = process.env.PORT || 10000

// Serves WebSocket connections at /ws (any path is fine)
const wss = new WebSocket.Server({ server, path: '/ws' })

// HTTP routes
app.get('/', (req, res) => {
  res.send('Hello over HTTP!')
})

// WebSocket connections
wss.on('connection', (ws) => {
  console.log('WebSocket client connected')
  ws.on('message', (message) => {
    console.log('Received:', message.toString())
    ws.send('Hello over WebSocket!')
  })
})

server.listen(port, () => {
  console.log(`Server listening on port ${port}`)
})
```

**FastAPI (Python)**

This example uses FastAPI's built-in WebSocket support:

```python:main.py
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

app = FastAPI()

# HTTP routes
@app.get("/")
async def root():
    return {"message": "Hello over HTTP!"}

# Serves WebSocket connections at /ws (any path is fine)
@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    logger.info("WebSocket client connected")
    try:
        while True:
            data = await websocket.receive_text()
            logger.info(f"Received: {data}")
            await websocket.send_text("Hello over WebSocket!")
    except WebSocketDisconnect:
        logger.info("Client disconnected")
```

**Django (Python)**

This example uses the Django [Channels](https://channels.readthedocs.io/) framework. Note that you'll need to run your Django app with an ASGI-compatible server like [Daphne](https://github.com/django/daphne) or [Uvicorn](https://www.uvicorn.org/).

```python:routing.py
from django.urls import path

from . import consumers

# Serves WebSocket connections at /ws (any path is fine)
websocket_urlpatterns = [
    path("ws", consumers.ExampleConsumer.as_asgi()),
]
```

```python:consumers.py
from channels.generic.websocket import AsyncWebsocketConsumer
import json

class ExampleConsumer(AsyncWebsocketConsumer):
    # Called when a client connects
    async def connect(self):
        await self.accept()

    # Called when a message is received from the client
    async def receive(self, text_data):
        # Send response back to this specific client
        await self.send(text_data=json.dumps({
            "message": "Hello over WebSocket!"
        }))

    async def disconnect(self, close_code):
        # Cleanup when client disconnects
        pass
```

**Rails (Ruby)**

This example uses the Rails [Action Cable](https://guides.rubyonrails.org/action_cable_overview.html) framework:

```ruby:app/channels/example_channel.rb
class ExampleChannel < ApplicationCable::Channel
  # Called when a client connects
  def subscribed
    # Channel is ready to receive messages
  end

  # Called when a message is received from the client
  def receive(data)
    # Send response back to this specific client
    transmit({ message: "Hello over WebSocket!" })
  end

  def unsubscribed
    # Cleanup when client disconnects
  end
end
```

Action Cable serves WebSocket connections at `/cable` by default.

## Connecting from clients

After you deploy WebSocket capabilities to your web service, you can start initiating connections from client code. To test quickly, you can install the [`websocat`](https://github.com/vi/websocat) command-line tool to connect directly from your terminal:

```shell{outputLines:2,4-5}
brew install websocat
websocat wss://example-app.onrender.com/ws
test
test
Hello over WebSocket!
```

> **Always use the `wss` protocol for WebSocket connections over the public internet.**
>
> If you use `ws`, most WebSocket clients fail when Render responds to their "handshake" request with a 301 code (attempting to redirect to TLS).
>
> For local server testing and connections over your [private network](private-network), use `ws`.

Here's a simple Node.js client that connects to a Render-hosted WebSocket server:

```js:client.js
const WebSocket = require('ws')

const ws = new WebSocket('wss://example-app.onrender.com/ws') // highlight-line

ws.onopen = () => {
  ws.send('Hello from the client!')
}

ws.onmessage = (event) => {
  console.log('Received:', event.data)
}
```

Regardless of your language or framework, all you need to do is specify your web service's public URL ([custom domains](custom-domains) work great), including the path for your WebSocket server.

## Maintaining connections

Render does not enforce a maximum duration for WebSocket connections. However, a variety of factors can cause a connection to be interrupted (instance shutdowns, network issues, platform maintenance, and so on).

To maintain active connections and detect stale ones, your web service and its connected clients should periodically send each other keepalive messages. The [WebSocket protocol](https://datatracker.ietf.org/doc/html/rfc6455#section-5.5.2) defines special `ping` and `pong` control frames specifically for this purpose. When one side sends a `ping`, the other side should respond with a `pong`:

Diagram showing WebSocket ping and pong messages

Many WebSocket libraries automatically handle `pong` responses, so you usually only need to implement `ping` logic on each side.

### Server-side pings

On the server side, periodic pings help you detect stale connections as early as possible. This helps you free up resources to maintain performance.

The example below extends the earlier [Express example](#web-service-setup) to add a basic "heartbeat" using `ping`. The same concepts apply to other languages and frameworks.
```js:app.js
const express = require('express')
const { createServer } = require('http')
const WebSocket = require('ws')

const app = express()
const server = createServer(app)
const port = process.env.PORT || 10000

const wss = new WebSocket.Server({ server, path: '/ws' })

// Called for a connection whenever client responds with a pong
function heartbeat() {
  this.isAlive = true
}

wss.on('connection', function connection(ws) {
  ws.isAlive = true
  ws.on('error', console.error)
  ws.on('pong', heartbeat)
  ws.on('message', (message) => {
    console.log('Received:', message.toString())
    ws.send('Hello over WebSocket!')
  })
})

// Ping all connected clients every 30 seconds
const interval = setInterval(function ping() {
  wss.clients.forEach(function each(ws) {
    // Close connections that failed to "pong" the previous ping
    if (ws.isAlive === false) return ws.terminate()

    ws.isAlive = false
    ws.ping()
  })
}, 30000)

// Standard shutdown logic
wss.on('close', function close() {
  clearInterval(interval)
})

server.listen(port, () => {
  console.log(`Server listening on port ${port}`)
})
```

_Adapted with appreciation from the [`ws` README](https://github.com/websockets/ws?tab=readme-ov-file#how-to-detect-and-close-broken-connections)_

### Client-side reconnects

Your clients should include logic to reconnect to your service in the event of an interruption. This logic should account for both "graceful" disconnects (such as your service closing the connection due to an [instance shutdown](#handling-instance-shutdown)) and unexpected errors (such as the connection becoming stale due to a network issue).

Reconnection logic should use **exponential backoff** to avoid overwhelming the server if it's in a degraded state.

> **Clients are not guaranteed to reconnect to the same instance after a disruption.**
>
> Render's load balancer assigns each incoming WebSocket connection to a random instance of your service, regardless of past connection history.
The longer example below demonstrates client-side reconnection logic with exponential backoff in Node.js. The same concepts apply to other languages and frameworks.

```js:client.js
const WebSocket = require('ws')

const wsUrl = 'wss://example-app.onrender.com/ws'

let ws = null
let reconnectAttempts = 0
const maxReconnectAttempts = 10
const baseBackoffDelay = 1000 // Start with 1 second backoff delay
let pingInterval = null
let pongTimeout = null

// Reusable connect function to call from reconnection logic
function connect() {
  ws = new WebSocket(wsUrl)

  ws.on('open', () => {
    console.log('Connected to server')
    reconnectAttempts = 0 // Reset on successful connection
    startPinging()
  })

  ws.on('message', (data) => {
    console.log('Received:', data.toString())
  })

  ws.on('pong', () => {
    // Server responded, connection is not stale
    clearTimeout(pongTimeout)
  })

  ws.on('close', (code, reason) => {
    console.log(`Connection closed: ${code} ${reason}`)
    cleanup()
    handleReconnect()
  })

  ws.on('error', (error) => {
    console.error('WebSocket error:', error.message)
    // The 'close' event fires after this, triggering reconnect
  })
}

// Initializes 30-second ping interval to detect stale connections
function startPinging() {
  pingInterval = setInterval(() => {
    if (ws.readyState === WebSocket.OPEN) {
      ws.ping()
      // If no pong response within 10 seconds, terminate stale connection
      pongTimeout = setTimeout(() => {
        console.log('No pong received, terminating stale connection')
        ws.terminate() // Force close, triggering reconnect
      }, 10000)
    }
  }, 30000)
}

// Defines reconnection logic with exponential backoff
function handleReconnect() {
  if (reconnectAttempts >= maxReconnectAttempts) {
    console.error('Max reconnection attempts reached')
    return
  }

  reconnectAttempts++

  // Exponential backoff: 1s, 2s, 4s, 8s, etc. (max 60 seconds)
  const delay = Math.min(baseBackoffDelay * Math.pow(2, reconnectAttempts - 1), 60000)
  console.log(`Reconnecting in ${delay}ms (attempt ${reconnectAttempts})`)

  setTimeout(connect, delay) // Reattempts connection after specified delay
}

function cleanup() {
  clearInterval(pingInterval)
  clearTimeout(pongTimeout)
}

connect() // Start the initial connection
```

### Handling instance shutdown

Render periodically swaps out your web service's running instances. This occurs most commonly when you [deploy a new version](deploys#zero-downtime-deploys) of your service, and it also happens as part of standard [platform maintenance](platform-maintenance).

As part of shutting down an instance, Render sends it a `SIGTERM` signal and gives it a 30-second window to [shut down gracefully](deploys#graceful-shutdown). You can extend this window to a maximum of 300 seconds by [setting a shutdown delay](deploys#setting-a-shutdown-delay).

> **Does your use case require a shutdown delay longer than 300 seconds?**
>
> Please reach out to our support team in the [Render Dashboard.](https://dashboard.render.com?contact-support)

During the shutdown window, you can gracefully close any open WebSocket connections and optionally send clients a message specific to this scenario. You can also [save any relevant session state](#saving-session-state).

### Saving session state

If a WebSocket connection is interrupted and your service instance has been storing state relevant to that client, you can save that state to a [Render Key Value](key-value) instance or other shared storage:

[diagram]

This way, if the client reconnects, whichever instance it connects to can fetch the saved state and resume the session.

If you use this pattern, you can set a TTL for saved session state to automatically invalidate it after it's no longer needed.

## FAQ

###### Can I receive WebSocket connections on a different port from other HTTP requests?
**Not over the public internet.** All public internet traffic to your web service is routed to a single port (the default port is `10000`). This includes WebSocket connections, which start as HTTP requests that are then upgraded. You _can_ receive WebSocket connections on a different port over your [private network](private-network). However, this is limited to connections from your other Render services in the same region. ###### How long can a WebSocket connection stay open? *Until the connected instance shuts down.* Render doesn’t impose a fixed timeout for WebSocket connections, but they close automatically when the instance is replaced (for example, during a deploy). For details, see [Maintaining connections.](#maintaining-connections) ###### Do WebSocket messages count as outbound bandwidth usage? *Some of them do.* _Outbound_ WebSocket messages from your services over the public internet count as [outbound bandwidth](outbound-bandwidth) usage. Inbound messages and private network connections do _not_ count as outbound bandwidth usage. ###### Is there a limit on the number of open WebSocket connections a service can have? *No.* However, a large number of connections can strain your instance's compute resources, resulting in degraded performance. To handle more connections, you can upgrade to a larger instance type or scale to multiple instances: - Larger instance types have more RAM and CPU, which enables each instance to handle more connections. - [Scaling your service horizontally](scaling) enables you to distribute connections across multiple instances, reducing the load on each. - When a client initiates a WebSocket connection, Render's load balancer assigns it to one of your service's instances at random. # Outbound Bandwidth > *On August 1, 2025, we lowered bandwidth pricing and expanded the types of traffic that are billed.* > > This article reflects the new pricing model. 
For more information about these changes, see the [blog post](blog/new-bandwidth-pricing-on-render). Render tracks the amount of network traffic sent from your workspace's services to destinations outside of Render. This traffic is billed as *outbound bandwidth* usage. Each month, your workspace receives an included amount of outbound bandwidth based on its [plan](pricing). If your services exceed this amount, Render bills your workspace for an additional 100 GB of bandwidth. Unused bandwidth does not roll over to the next month. ## Billed traffic All outbound traffic from Render services to the public internet is billed as outbound bandwidth usage. To understand which traffic is billed, see the diagrams and table below: [img] [img] | Traffic type | Billed? | |--------|--------| | HTTP responses sent from your [web services](web-services) and [static sites](static-sites) to browsers and other clients over the public internet | ☑️ | | WebSocket responses sent from your [web services](web-services) to browsers and other clients over the public internet | ☑️ | | **Service-initiated:** Network communication initiated by any Render service or [one-off job](one-off-jobs) over the public internet | ☑️\* Same-region traffic to Amazon S3 or Google Cloud Storage is not billed as outbound bandwidth usage. | | **Service-initiated:** Query responses from [Render Postgres](postgresql) and [Render Key Value](key-value) datastores to a destination outside of Render | ☑️ | | Traffic to a non-Render resource over a [private link connection](private-network#integrating-with-aws-privatelink) | ☑️\* This traffic is billed at a significantly lower rate than other outbound traffic. | | Private network traffic between Render services in the same region | ➖ | | Inbound traffic from any source to your Render services | ➖ | ## Pricing Outbound bandwidth is billed at *$15 per 100 GB* beyond your workspace's monthly included amount. 
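To make the overage math concrete, here's an illustrative sketch. It assumes overage is billed in whole 100 GB increments at $15 each, as the pricing above suggests; it is not an official billing calculator:

```js
// Illustrative arithmetic for outbound bandwidth overage, assuming billing
// in whole 100 GB increments at $15 each (an interpretation of the pricing
// described above, not an official calculator).
function overageChargeUSD(usedGB, includedGB) {
  const overageGB = Math.max(0, usedGB - includedGB)
  const increments = Math.ceil(overageGB / 100)
  return increments * 15
}
```

For example, a Professional workspace (500 GB included) that sends 730 GB of outbound traffic in a month would be billed for three 100 GB increments, or $45.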
Traffic sent over a [private link connection](private-link) is billed at a significantly lower rate than other outbound traffic. Each workspace receives a monthly included amount of outbound bandwidth based on the workspace's plan: | Workspace plan | Included bandwidth | |--------|--------| | *Hobby* | 100 GB | | *Professional* | 500 GB | | *Organization* | 1 TB | | *Enterprise* | Custom | Many workspaces never exceed their included amount, which means they aren't charged for outbound bandwidth at all. ## FAQ ###### Why has Render made changes to outbound bandwidth pricing? The recent update brings our pricing model in line with industry norms and helps us accurately reflect real infrastructure usage. This change enables us to maintain reliable, high-performance service across the platform while building a sustainable business. ###### Have these changes increased my monthly costs? *For almost all workspaces, no.* However, your bandwidth costs might increase if: - Your services send a high volume of traffic over the internet to externally hosted APIs and datastores - Your web services send a high volume of WebSocket messages to connecting clients Customers with a significant bill increase were notified and will receive bandwidth support credits for August and September to give them time to optimize and adjust usage. ###### What happens if I exceed my monthly included amount of outbound bandwidth? - *If you've linked a payment method,* Render bills you for an additional 100 GB of bandwidth. - Unused bandwidth does not roll over to the next billing period. - *Otherwise,* Render spins down your workspace's services until the start of the next month. ###### How do I monitor my outbound bandwidth usage? - View an individual service's recent usage from its *Metrics* page in the [Render Dashboard][dboard]. - The *Outbound Bandwidth* graph displays the service's usage broken out by traffic type. 
[See details.](service-metrics#outbound-bandwidth)

- View your workspace's total monthly usage from the [Billing page](https://dashboard.render.com/billing) in the Render Dashboard.

# Fully Managed TLS Certificates

All applications and static sites hosted on Render come with *fully managed and free TLS* certificates. No setup is required; everything works out of the box.

Render uses Let's Encrypt and Google Trust Services to issue certificates for your custom domain and *automatically renews* them before their expiration date. You get free TLS certificates for your service's `onrender.com` subdomain, as well as for the [custom domains](custom-domains) you add to it, including *wildcard domains*.

Finally, Render automatically redirects all `HTTP` requests to `HTTPS` so your users' security is never compromised.

# Custom Domains on Render

> *Hobby workspaces support a maximum of two custom domains across all services.*
>
> Professional workspaces and higher support unlimited custom domains.

You can apply your own custom domains to Render [web services](web-services) and [static sites](static-sites). Services with a custom domain also keep their `onrender.com` subdomain.

Render automatically creates and renews TLS certificates for all custom domains, including [wildcard domains](#wildcard-domains). All HTTP traffic to a custom domain is automatically redirected to HTTPS.

Apply a custom domain in three steps:

1. [Add your domain in the Render Dashboard](#1-add-your-domain-in-the-render-dashboard).
2. [Configure DNS](#2-configure-dns-with-your-provider) with your domain's provider.
3. [Verify your domain](#3-verify-your-domain) in the Render Dashboard.

These steps are detailed below.

## 1. Add your domain in the Render Dashboard

1. In the [Render Dashboard][dboard], select the service that will use your custom domain.
   - Only web services and static sites support custom domains.
2.
Open the service's *Settings* page and scroll down to the *Custom Domains* section: [img] 3. Click *+ Add Custom Domain* and provide your custom domain name. > *If your domain includes Unicode characters,* first convert it to Punycode with a tool like [Punycoder](https://www.punycoder.com/). > > For example, you would provide `ëxample.com` as `xn--xample-ova.com`. 4. Click *Save*. Your custom domain now appears in the list: [img] - *If you add a `www` subdomain* (e.g., `www.example.org`), Render automatically adds the corresponding root domain and redirects it to the `www` subdomain. This is shown in the screenshot above. - *If you add a root domain* (e.g., `example.org`), Render automatically adds the corresponding `www` subdomain and redirects it to the root domain. *Your domain does not yet point to your service!* Next, you'll [configure DNS](#2-configure-dns-with-your-provider). ## 2. Configure DNS with your provider > *Remove any `AAAA` records from your domain while configuring DNS.* > > `AAAA` records map to an IPv6 address, and Render uses IPv4. These records can cause unexpected behavior for your custom domain. 1. Log in to your custom domain's DNS provider (such as Cloudflare, Namecheap, or GoDaddy). 2. Navigate to the DNS settings for your domain. 3. Add DNS records for your domain based on your provider, then return here: - [Cloudflare](configure-cloudflare-dns) - [Namecheap](configure-namecheap-dns) - [All other providers](configure-other-dns) > *If you're adding a wildcard domain,* see [additional instructions](#wildcard-domains). 4. Return to the Render Dashboard and [verify your domain](#3-verify-your-domain). ## 3. Verify your domain 1. Return to your service's *Custom Domains* settings in the [Render Dashboard][dboard]: [img] 2. Click the *Verify* button next to your custom domain. - If verification fails, your DNS settings might not have propagated yet. Wait a few minutes and try again. 
See also [Speeding up domain verification](#speeding-up-domain-verification). 3. If verification succeeds, Render issues a TLS certificate for your domain and updates the verification status: [img] 4. Try visiting your custom domain in a browser. - If you see a *502 Bad Gateway* error, Render might still be updating routing rules for your service. Wait a few minutes and try again. 5. When your custom domain loads successfully, you're good to go! ### Speeding up domain verification We recommend removing cached entries in public DNS servers after updating your DNS records. This is especially important if you're updating nameservers for your domains. Clearing the cache will speed up DNS verification and TLS certificate issuance for your domains. Use the links below to clear cached records in popular public DNS servers: - [Flush Google Public DNS Cache](https://developers.google.com/speed/public-dns/cache) - [Purge Cloudflare Public DNS Cache](https://1.1.1.1/purge-cache/) - [Refresh OpenDNS Cache](https://cachecheck.opendns.com/) ## Disabling your `onrender.com` subdomain If your service has at least one custom domain, you can disable the service's default `onrender.com` subdomain. After you do, your service is reachable exclusively at its custom domain(s). 1. In the [Render Dashboard][dboard], select the service that you want to disable the `onrender.com` subdomain for. 2. Open the service's *Settings* page and scroll down to the *Custom Domains* section: [img] 3. Toggle the *Render Subdomain* setting to *Disabled* and confirm. After you disable your `onrender.com` subdomain, all requests to it receive a 404 response. These requests do _not_ reach your service. You can re-enable the subdomain at any time. ## Advanced DNS configuration ### Wildcard domains You can apply a wildcard domain (e.g., `*.example.org`) to a Render service to point all matching subdomains (`docs.example.org`, `blog.example.org`, etc.) 
to it: [img] This configuration requires setting _three_ `CNAME` DNS records with your provider: | Name | Value | |--------|--------| | `*` | Your service's `onrender.com` subdomain, available in the [Render Dashboard][dboard]. *Example:* `svelte.onrender.com` | | `_acme-challenge` | Has the format `[your-service-id].verify.renderdns.com` Enables Render to manage certificate issuance and renewal for your wildcard domain via Let's Encrypt. *Example:* `svelte.verify.renderdns.com` | | `_cf-custom-hostname` | Has the format `[your-service-id].hostname.renderdns.com` Enables Cloudflare (Render's DDoS protection provider) to validate ownership of your wildcard domain. *Example:* `svelte.hostname.renderdns.com` | #### Using Cloudflare DNS with wildcard domains If you manage your custom domain with Cloudflare DNS, note the following: If you add a wildcard domain (e.g., `*.example.com`) to Render but _not_ the corresponding root domain (e.g., `example.com`), using Cloudflare with [proxying enabled](https://community.cloudflare.com/t/what-is-the-proxied-feature-in-cloudflare/32887) (orange cloud) will cause traffic for the root domain to be sent to the _same Render destination_ as your wildcard domain. To prevent service disruptions, make sure to _disable_ proxying for your root domain (gray cloud). If you have any questions, please get in touch at support@render.com. ### CAA records > If your custom domain doesn't define any `CAA` records, you can ignore this section. If your custom domain defines [`CAA` records](https://en.wikipedia.org/wiki/DNS_Certification_Authority_Authorization), make sure to define records for Render's certificate authorities: - Let's Encrypt (`letsencrypt.org`) - Google Trust Services (`pki.goog; cansignhttpexchanges=yes`) Additionally, if you add a [wildcard domain](#wildcard-domains), make sure to define corresponding `issuewild` records for each authority. 
``` example.com IN CAA 0 issue "letsencrypt.org" example.com IN CAA 0 issuewild "letsencrypt.org" example.com IN CAA 0 issue "pki.goog; cansignhttpexchanges=yes" example.com IN CAA 0 issuewild "pki.goog; cansignhttpexchanges=yes" ``` # Configuring Cloudflare DNS > This guide assumes you've already added a custom domain from Cloudflare to your service in the [Render Dashboard][dboard]. If you haven't done this yet, first read [Custom Domains](custom-domains). ## Common setup Most commonly when setting up a Cloudflare custom domain, you add a CNAME record for the root domain, along with another for the `www` subdomain. > *Remove all `AAAA` records for your domain if it has any.* `AAAA` records map a domain to a corresponding IPv6 record, but Render does not yet support IPv6 addresses. As a result, `AAAA` records can interfere with your custom domain's behavior on Render. 1. Log in to your Cloudflare dashboard and select your domain from the Home page to open its settings. 2. Navigate to *SSL/TLS > Overview*. Set your encryption mode to *Full*: [img] 3. Navigate to *DNS > Records* and click *Add record*. 4. Create a new CNAME record that points to your Render service's `onrender.com` subdomain (obtain this value in the [Render Dashboard][dboard]): [img] - Set *Name* to `@` to specify your root domain (next you'll add a separate record for `www`). - Set *Target* to your Render service's subdomain (e.g., `example.onrender.com`). - Set *Proxy status* to *DNS only*. This ensures that requests go to Render instead of Cloudflare, so that we can verify the domain and issue a certificate. Click *Save*. 5. Repeat the previous step to create a _second_ CNAME record. - This time, set *Name* to `www` and provide the same values as before for *Target* and *Proxy status*. Click *Save*. 6. Your completed configuration should resemble this: [img] That's it! DNS changes might take a few minutes to propagate, after which your domain points to your Render service. 
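While you wait for DNS changes to propagate, you can script a quick check that your custom domain and your service's `onrender.com` subdomain resolve to the same place. This is a minimal sketch using Python's standard library; the hostnames are the placeholders from this guide, so substitute your own values:

```python
import socket

def _ips(host):
    """Resolve a hostname to its set of IP addresses."""
    return {info[4][0] for info in socket.getaddrinfo(host, None)}

def same_origin(domain, render_subdomain):
    """True if both names currently resolve to at least one shared IP."""
    try:
        return bool(_ips(domain) & _ips(render_subdomain))
    except socket.gaierror:
        return False  # one of the names doesn't resolve yet

# Placeholders from this guide; substitute your own domain and subdomain:
same_origin("www.example.com", "example.onrender.com")
```

Until propagation completes (or while Cloudflare proxying is still enabled), the two names may resolve differently, so a `False` result here isn't necessarily an error.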
You can check the status of your service's certificates and manually request verification in the [Render Dashboard][dboard] under *Custom Domains*: [img] After the Render Dashboard indicates that your certificates are issued and valid, you can optionally set *Proxy status* to *Proxied* for your CNAME records. ## Adding a wildcard custom domain without the base domain Your Cloudflare domain requires some additional configuration if _all_ of the following are true: - You're adding a wildcard custom domain (e.g., `*.example.com`) to your Render service. - You are _not_ adding the corresponding _base_ domain (e.g., `example.com`) to your service. - You've [enabled proxying](https://community.cloudflare.com/t/what-is-the-proxied-feature-in-cloudflare/32887) for your base domain (i.e., *Proxy status* is set to *Proxied*). ### Origin override with a Cloudflare Worker To direct wildcard traffic to Render while directing base domain traffic elsewhere, you can use a Cloudflare Worker to perform an [origin override](https://developers.cloudflare.com/workers/examples/bulk-origin-proxy). *The instructions below assume the following:* - You have the custom Cloudflare domain `example.com`. - You want your Render web service `example.onrender.com` to serve traffic for `*.example.com` - You want `base-domain-origin.com` to serve traffic for `example.com`. #### 1. Add a DNS record pointing to base-domain-origin.com [img] #### 2. Create a Worker 1. Navigate to *Workers* -> *Overview* -> *Create Service* 2. Name your service `base-domain-override`, select *HTTP Handler*, and click *Create service* [img] 3. Scroll down and click *Quick Edit*. 4. Add the following configuration. Replace `example.com` with your custom domain and make sure the `base-domain-origin` subdomain matches the DNS record you created in the first step. 
```javascript addEventListener('fetch', (event) => { event.respondWith(handleRequest(event.request)) }) async function handleRequest(request) { return fetch(request, { cf: { resolveOverride: 'base-domain-origin.example.com' }, }) } ``` [img] 5. Click *Save and Deploy* -> Navigate back to the Worker overview page -> Click *Triggers* -> *Add Route* 6. Add a route matching your base domain and click *Add Route*: [img] 7. Finally, add CNAME records for both your base domain and wildcard domain pointing to your onrender subdomain. Pointing your base domain to Render is required for an [orange to orange setup](https://community.cloudflare.com/t/the-orange-to-orange-problem/91864). With this configuration, Cloudflare will send traffic to your zone first. The Worker that you just set up will match the route and trigger an origin override, so traffic for the base domain will not get sent to Render. If you do not do this, Cloudflare will send the traffic directly to Render's zone and the Worker you set up will have no effect. [img] Your wildcard traffic should now be directed to Render and your base domain traffic directed to the origin you specified. If you have any questions, you can get in touch with us at support@render.com. # Configuring Namecheap DNS > This guide assumes you've already added a custom domain to your service in the Render Dashboard. If you haven't done this yet, first read [Custom Domains](custom-domains). To configure Namecheap for custom domains, we need to create `A` records for root custom domains and `CNAME` records for non-root domains (`www` or any other subdomains). In this guide, we'll configure Namecheap for `example.com` and `www.example.com`. > Make sure to remove any existing `AAAA` records for your domains when you update your DNS settings. `AAAA` records map a domain to a corresponding IPv6 record, but Render does not support IPv6 addresses yet. As a result, `AAAA` records can interfere with Render hosting your custom domains. 1.
Log in to Namecheap. You'll see your custom domain listed in the dashboard. [img]
2. Click on the *MANAGE* button on the right and select the *Advanced DNS* tab. 3. Remove any existing `A` records for `@` and click on `Add New Record`. Add an `A` record for host `@` pointing to Render's load balancer IP `216.24.57.1`. We recommend setting the TTL to 1 minute so we can verify the domain faster. [img]
4. Remove any existing `CNAME` or Redirect records for `www` and click on `Add New Record`. Add a `CNAME` record for host `www` pointing to your Render subdomain which looks like `example.onrender.com`. Again, set the TTL to 1 minute. [img]
The final configuration should look something like this: [img] That's it! DNS changes can take a few minutes to propagate, but once they do you should be all set. # Configuring DNS Providers > This guide assumes you've added a custom domain to your service in the Render Dashboard. If you haven't yet, first complete [this step](custom-domains#1-add-your-domain-in-the-render-dashboard). This article explains how to configure your DNS provider to point your [custom domain](custom-domains) to Render. Some of these steps might not apply to your provider. We have specific guides for popular DNS providers: - [Configuring Cloudflare DNS](configure-cloudflare-dns) - [Configuring Namecheap DNS](configure-namecheap-dns) We're also happy to help you set things up—just contact us via the Help link in the dashboard. > *Remove any `AAAA` records from your domain while configuring DNS.* > > `AAAA` records map to an IPv6 address, and Render uses IPv4. These records can cause unexpected behavior for your custom domain. ## Configuring root domains ### Using an `ANAME` or `ALIAS` record When you're pointing a root domain like `example.com` to your Render subdomain, you can use `ANAME` or `ALIAS` records if your DNS provider supports them. These records are not part of the standard DNS protocol but are implemented by some providers to make it easy to point root domains to other domains. [DNSimple](https://dnsimple.com/), [DNS Made Easy](https://dnsmadeeasy.com/), [Name.com](https://www.name.com/) and [NS1](https://ns1.com/) all support one or both of these record types. - `ANAME` records let you refer to other domains just like `CNAME` records, but behave like `A` records in that they ultimately resolve to an IP address. This is also often called [CNAME flattening](https://support.cloudflare.com/hc/en-us/articles/200169056-Understand-and-configure-CNAME-Flattening). Read more [here](https://dnsmadeeasy.com/services/anamerecords/). 
- `ALIAS` records map a root domain to another domain while coexisting with other record types for the root domain. Read more [here](https://support.dnsimple.com/articles/alias-record/). To configure your custom root domain for Render, add an `ANAME` or `ALIAS` record for your root domain that points to your app's Render subdomain. For example, if your app subdomain is `example.onrender.com` and your custom domain is `example.com`, add an `ANAME` or `ALIAS` record for `example.com` that points to `example.onrender.com`. ### Using an `A` record If your DNS provider does not support `ANAME` or `ALIAS` records or `CNAME` flattening, you need to add an `A` record to point to your Render app. `A` records point to IP addresses: use Render's load balancer IP, `216.24.57.1`, for your root domain. After you change your DNS records, they need to propagate across the internet, which can delay the verification process. Use the `dig` command or an online service like [dnschecker](https://dnschecker.org/) to verify the correct response. If you see additional values in the DNS response beyond those provided by Render, your DNS provider may have defaults or features enabled (e.g., domain forwarding) that you need to remove or disable. > If you are using Cloudflare as a DNS provider, you must use a `CNAME` record instead of an `A` record. See [Configuring Cloudflare DNS](configure-cloudflare-dns) for instructions. ## Configuring `www` and other subdomains For non-root domains, always add a `CNAME` record pointing to your app's Render subdomain. For example, if your Render subdomain is `example.onrender.com` and your custom domain is `www.example.com`, add a `CNAME` record for `www` that points to `example.onrender.com`.
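If you'd rather script the propagation check than run `dig` by hand, a short resolution test is enough. This is a minimal sketch using Python's standard library; `example.com` is a placeholder for your own root domain:

```python
import socket

def resolves_to(hostname, expected_ip):
    """Return True if the hostname currently resolves to the expected IP."""
    try:
        return socket.gethostbyname(hostname) == expected_ip
    except socket.gaierror:
        return False  # not resolvable yet (or at all)

# A root domain with an A record should resolve to Render's load balancer IP:
resolves_to("example.com", "216.24.57.1")
```

For `CNAME`-based subdomains, the final IP depends on what your `onrender.com` subdomain resolves to, so compare against that value instead of the load balancer IP.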
# Outbound IP Addresses > *Outbound IPs were recently updated to use new ranges for each region.* > > [See details.](#changes-to-outbound-ips) Render services send outbound traffic through specific sets of IP ranges depending on their [region](regions). You can use these ranges to connect your service to IP-restricted environments outside of Render. Each IP range uses [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation). As an example, the range `216.24.60.0/24` represents the IP addresses from `216.24.60.0` to `216.24.60.255`, inclusive. Your service might use _any_ IP address within its associated ranges. Outbound IP ranges are shared across _all_ services in the same region. > *Interested in unique, Render-native static IPs for your workspace?* > > - Please upvote [this feature request](https://feedback.render.com/features/p/exclusive-not-shared-static-outbound-ips). > - To ensure unique outbound IPs at this time, you can configure a static IP provider like [QuotaGuard](quotaguard). ## Obtaining your outbound IPs *To obtain a service's outbound IP ranges:* 1. Open the [Render Dashboard][dboard]. 2. Click a service to open its details page. 3. Open the *Connect* dropdown in the upper right. 4. Switch to the *Outbound* tab. Copy the list of IP ranges: [img] *Don't see the Outbound tab?* - Make sure you're viewing the details page for a particular service, not your workspace home. - Note that [static sites](static-sites) don't use outbound IP addresses, because they can't initiate outbound traffic. - If you created your workspace before *January 23, 2022*, [see below](#exception-for-some-oregon-services). ### Exception for some Oregon services *For workspaces created before January 23, 2022,* services in the Oregon [region](regions) do _not_ use a fixed set of outbound IP addresses. This remains the case after recent [changes to outbound IPs](#changes-to-outbound-ips). 
You can configure outbound IP addresses for these Oregon-region services in one of the following ways: - Configure a static IP provider like [QuotaGuard](quotaguard). - Create a _new_ workspace, then create replacement Oregon-region services in that workspace. Migrate over any data, domains, and configuration. ## Changes to outbound IPs On *November 13, 2025*, Render completed a migration to new outbound IP ranges for each [region](regions). Prior to the migration, each region used a different, fixed set of individual IP addresses. Those individual addresses are now retired. If you use your service's outbound IPs to authorize access to an external system, make sure to add the new IP ranges to that system's access rules: 1. Open your service's settings in the [Render Dashboard][dboard] and click the *Connect* dropdown in the upper right. 2. Switch to the *Outbound* tab to view your service's new IP ranges: [img] *Don't see the Outbound tab?* - Make sure you're viewing the details page for a particular service, not your workspace home. - Note that [static sites](static-sites) don't use outbound IP addresses, because they can't initiate outbound traffic. - If you created your workspace before *January 23, 2022*, [see above](#exception-for-some-oregon-services). 3. Configure your external system to allow traffic from the listed ranges. Your service might connect from any address within these ranges. 4. If your external system allows connections from any of Render's original IP addresses, you can safely remove those addresses from your access rules. ## FAQ ###### What if my external system doesn't support allowlisting an entire CIDR range? To limit your service's outbound traffic to a smaller number of IP addresses, you can configure a static IP provider like [QuotaGuard](quotaguard). If you do, your service's outbound traffic will flow through your provider-managed IP(s), which you can then allow in your external system.
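When adding these ranges to an external system's access rules, it can help to confirm exactly which addresses a given CIDR range covers. The sketch below uses Python's standard `ipaddress` module with the example range from this page (your service's actual ranges will differ):

```python
import ipaddress

def ip_in_ranges(ip, cidr_ranges):
    """Check whether an IP address falls inside any of the given CIDR ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in cidr_ranges)

ranges = ["216.24.60.0/24"]  # example range from this page
ip_in_ranges("216.24.60.42", ranges)   # True: within 216.24.60.0-216.24.60.255
ip_in_ranges("216.24.61.1", ranges)    # False: outside the range
ipaddress.ip_network("216.24.60.0/24").num_addresses  # 256 addresses in a /24
```

If your external system needs individual addresses rather than a range, `ip_network(...).hosts()` can enumerate them, though for systems that can't handle ranges at all, a static IP provider is usually the better option.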
> *Interested in unique, Render-native static IPs for your workspace?* > > Please upvote [this feature request](https://feedback.render.com/features/p/exclusive-not-shared-static-outbound-ips). # Inbound IP Rules You can configure which IP addresses can connect to your Render services over the public internet: [img] By setting these inbound IP rules, you can grant access only to IP ranges you trust. *All workspaces* can set inbound IP rules for: - Individual [Render Postgres](postgresql-creating-connecting#restricting-external-access) and [Key Value](key-value#enabling-external-connections) datastores *Enterprise orgs* can also set rules for: - Individual web services and static sites - An entire [environment](projects) - An entire workspace After you set IP rules, Render only allows inbound service connections from the IP ranges you specify. Disallowed IPs are automatically blocked with a `403 Forbidden` response. Requests are blocked at the edge and do not reach your service. For web services, blocked requests _do_ still appear in [HTTP request logs](logging#http-request-logs). > *Inbound IP rules apply only to connections from the public internet.* > > These rules do not apply to inter-service communication over your [private network](private-network). For private network controls, see [Blocking cross-environment traffic](projects#blocking-cross-environment-traffic). ## Setup ### Render Postgres / Key Value All workspaces can set inbound IP rules for Render Postgres and Key Value. See the documentation for each type of managed datastore: - [Render Postgres](postgresql-creating-connecting#restricting-external-access) - [Render Key Value](key-value#enabling-external-connections) ### All other resource types > *Setting IP rules for any resource besides a [managed datastore](#render-postgres--key-value) requires an Enterprise org.* Follow these steps to set inbound IP rules for a web service, static site, environment, or workspace. 1. 
In the [Render Dashboard][dboard], open the settings page for the service or environment you want to configure. - For workspace-level rules, click *Network Access* in the left pane of your workspace home. 2. Scroll down to the *Networking* section and find *Inbound IP Restrictions*: [img] If you haven't made any changes yet, you'll see a single default rule: `0.0.0.0/0` (allow all IPs). 3. Click an existing rule to edit it, or click *+ Add source* to create a new rule. You can also click the trash icon next to a rule to delete it. - Learn more about [rule format](#rule-format) below. > *If you delete all rules, Render blocks _all_ inbound traffic to the affected service(s)!* 4. Click *Save* to apply your changes. You're all set! Your new rules take effect within a few seconds. ## Rule format Inbound IP rules are allowlists of IP ranges you specify using [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation). Render allows connections from the IP ranges you specify and blocks connections from all other IPs. > *Only IPv4 CIDR ranges are supported.* Here are some example ranges: | Rule | Meaning | |--------|--------| | `0.0.0.0/0` | Allow any IP address. This is the default rule for services besides [Render Key Value instances.](key-value#enabling-external-connections) | | `203.0.113.0/24` | Allow IP addresses `203.0.113.0` through `203.0.113.255`. This might represent a company's office network or a trusted third-party service. | | `198.51.100.16/29` | Allow IP addresses `198.51.100.16` through `198.51.100.23`. This smaller range of 8 addresses might represent a specific subnet of an office network. | | `203.0.113.42/32` | Allow _only_ the IP address `203.0.113.42`. The `/32` suffix limits the rule to the single defined IP address. This is useful for allowing only your own development machine. 
| ## Combining IP rules > *This section applies only to Enterprise orgs.* When a service is subject to IP rules at multiple levels (workspace, environment, and/or service), an inbound IP must be allowed by _each level's_ rules to successfully connect: [img] If an IP is allowed at the service level but disallowed at the environment level (or vice versa), the connection is blocked. When you view a service's inbound IP rules, you can see each level of rules that apply to it: [img] ## FAQ ###### Can I set IP rules for a private service / background worker / cron job? No. These service types never receive traffic from the public internet. ###### How do I allow connections from all IP addresses? Include the value `0.0.0.0/0` (CIDR notation for "any IPv4 address") in your inbound IP rules. This is the default rule for services besides [Render Key Value instances](key-value#enabling-external-connections). ###### What happens if I delete all rules for a given resource? If you delete all inbound IP rules for a given service, environment, or workspace, Render blocks _all_ inbound traffic to the affected service(s). ###### Why don't I see inbound IP rule settings for my web service? Inbound IP rules require an [Enterprise org](enterprise-orgs) for everything besides [managed datastores](#render-postgres--key-value). # The Render Dashboard The Render Dashboard is the web interface for managing everything in your Render workspace—services, team members, billing, and more: [img] Your dashboard's main page lists the services in your workspace, along with any [projects](projects) you've organized them into. Click any service to view its details, logs, and settings. Use the left panel to jump to views for your [Blueprints](infrastructure-as-code) and [environment groups](configure-environment-variables#environment-groups). This article describes some common dashboard actions to get you up and running.
*_Most_ dashboard actions are documented in the article for the corresponding feature.* > You can also manage Render resources from your terminal with the [Render CLI](cli) or programmatically with the [Render API](api). ## Create a new service Create a new service by clicking the *+ New* button in the top-right corner of the [Render Dashboard][dboard]: [img] Select a service type from the list and complete the creation flow to deploy your code. > *Deploying for the first time?* See our [quickstarts](#quickstarts). ## Create a workspace See [Workspaces, Members, and Roles](team-members). ## Navigating the dashboard Open workspace-wide search with `⌘+K` / `CTRL+K`, then use the arrow keys to jump directly to any resource: [img] While viewing a resource, use the breadcrumbs at the top of the page to navigate to a different service, environment, or project: [img] Switch workspaces using the dropdown at the top of the left pane: [img] ## Manage billing From your workspace's homepage in the [Render Dashboard][dboard] click *Billing* in the left pane: [img] *From this page, you can:* - View and update your plan - Update your payment method - View accrued usage charges for the current billing month - View invoices for past months - View usage against your monthly included amounts of: - [Free instance hours](free#free-instance-hours) - [Outbound bandwidth](outbound-bandwidth) - [Build pipeline minutes](build-pipeline#pipeline-minutes) ## Set your display theme The Render Dashboard provides light and dark display themes, along with high-contrast variants of each. To set your display theme: 1. Open the account menu in the top-right corner of the [Render Dashboard][dboard]. - *If you don't need to toggle high contrast,* click *Theme* to set your display theme and you're all set: [img] When you change your theme this way, Render keeps your current high contrast setting. 2. *If you _do_ need to toggle high contrast,* instead click *Account settings*. 3. 
Scroll down to the *Appearance* section: [img] 4. Click *Edit* to switch between *Light*, *Dark*, and *System* (which follows your operating system's theme). 5. Click *Save changes*. 6. Separately, use the toggle to enable or disable *High Contrast Mode*. ## Add a user image If you have a [Gravatar](https://gravatar.com/) account associated with your Render account's email address, your Gravatar image appears next to your email address in the top-right corner of the dashboard. Otherwise, a generic user icon is shown. # SSH and Shell Access You can initiate a shell session to your Render service from its *Shell* page in the [Render Dashboard][dboard]: [img] If your service is [scaled](scaling) to multiple instances, you can connect to a specific instance using the *Instance* dropdown. You can also SSH into your services from the terminal after [completing setup](#ssh-setup). ## Compatible service types Support for shell access varies by service type: | Service type | Dashboard shell | SSH | |--------|--------|--------| | *Paid [web service](web-services)* | 🟢 | 🟢 | | *[Private service](private-services)* | 🟢 | 🟢 | | *[Background worker](background-workers)* | 🟢 | 🟢 | | *[Cron job](cronjobs)* | 🟨 [See details.](#cron-job-connections) | ❌ | | *[Free](free) web service* | ❌ | ❌ | | *Other service types (static sites, datastores)* | ❌ | ❌ | ## SSH setup ### 1. Generate an SSH key pair > *Skip this step if you already have an SSH key on your machine that you want to use.* 1. Run the following command to generate an Ed25519 key pair in the `~/.ssh` directory: ```shell ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 ``` You can optionally use a different [key type](#supported-key-types). 2. The command prompts you to provide an optional passphrase for your private key (recommended for added security). 3. 
The command generates two files in your `~/.ssh` directory: - `~/.ssh/id_ed25519` (private key) - `~/.ssh/id_ed25519.pub` (public key) > **Your private key is a secret credential. Don't share it with anyone.** > > To enable SSH access, you'll share the _public_ key with Render. ### 2. Add your public key to your Render account 1. Open your [Account settings page](https://dashboard.render.com/settings#ssh-public-keys) in the Render Dashboard. 2. Find the **SSH Public Keys** section and click **+ Add SSH Public Key**. The creation dialog appears. 3. Provide a descriptive **Name** for the key (e.g., "Personal Laptop"). 4. Copy the full contents of your _public_ key file (ends in `.pub`) to your clipboard. On macOS, you can use the `pbcopy` command to copy the file to your clipboard: ```shell pbcopy < ~/.ssh/id_ed25519.pub ``` 5. Paste your public key into the **Key** field: [img] 6. Click the **Add SSH Public Key** button to save your key. All set! You're ready to [start an SSH session](#starting-an-ssh-session). ## Starting an SSH session > **SSHing into a Docker-based service?** See [Docker-specific configuration](#docker-specific-configuration). After completing [SSH setup](#ssh-setup), you can start SSH sessions from your terminal using the [Render CLI](cli), or by running the `ssh` command directly. Select a method from the tabs below: **Render CLI** 1. [Install and log in to the Render CLI](cli#setup) if you haven't already. 2. Run the following command: ```shell render ssh ``` This opens an interactive menu that lists your workspace's SSH-compatible services. 3. Use the arrow keys to select a service and press **Enter**. The interactive menu closes and the SSH session starts. To skip menu-based selection, you can include your service's ID directly in the `render ssh` command: ```shell render ssh srv-abc123 ``` **SSH command** 1. In the [Render Dashboard][dboard], open the settings for the service you want to connect to. 2.
Click the **Connect** dropdown in the upper right and select the **SSH** tab: [img] > **Don't see the SSH tab?** The selected service is not SSH-compatible. See [Compatible service types](#compatible-service-types). 3. Copy the SSH command to your clipboard. 4. Paste the SSH command into your terminal and run it. ```shell ssh YOUR_SERVICE@ssh.YOUR_REGION.render.com ``` 5. You might see a warning like this: ``` The authenticity of host 'render.com (IP_ADDRESS)' can't be established. ED25519 key fingerprint is (SSH_KEY_FINGERPRINT) Are you sure you want to continue connecting (yes/no)? ``` If you do, confirm that the fingerprint in the message matches Render’s [public key fingerprint](#renders-public-key-fingerprints) for your region. If it does, type `yes` to continue. 6. If you receive a "permission denied" message, see [Troubleshooting permission failures](#troubleshooting-permission-failures). ### Connecting to a specific instance By default, SSH sessions connect to a random running instance of your service. To connect to a _specific_ instance, include that instance's 5-character slug in the hostname of your SSH command: ```shell{outputLines:1,3-4} # Random instance ssh srv-abc123@ssh.oregon.render.com # Specific instance ssh srv-abc123-d4e5f@ssh.oregon.render.com ``` As shown above, you append the instance slug to the service's ID (separated by a hyphen) to form the complete hostname. Instance slugs are visible in your service's [logs](logging) and [application metrics](service-metrics#cpu-and-memory-usage). You cannot SSH into an instance that is no longer running. ### Troubleshooting permission failures If you receive a "Permission denied" error, Render rejected the incoming SSH session. 
Take the following steps first to troubleshoot this issue: #### Confirm which SSH key you're using Add the "verbose" flag (`-v`) to your SSH command to get more details about which key is being used: ```shell{outputLines:2-9} ssh -v YOUR_SERVICE@ssh.YOUR_REGION.render.com [...] debug1: identity file /Users/YOUR_NAME/.ssh/id_ed25519 type 3 debug1: identity file /Users/YOUR_NAME/.ssh/id_ed25519-cert type -1 [...] debug1: Next authentication method: publickey debug1: Offering public key: /Users/YOUR_NAME/.ssh/id_ed25519 [...] Permission denied (publickey). ``` #### Confirm which keys are attached to your Render account 1. List any keys you have loaded into the [ssh-agent](https://en.wikipedia.org/wiki/Ssh-agent). ```shell ssh-add -l ``` This should print a line like the following for each loaded key: ``` 256 SHA256:SSH_KEY_FINGERPRINT YOUR_NAME@YOUR_HOST (ED25519) ``` 2. Open your settings page in the [Dashboard](https://dashboard.render.com/) and find the list of SSH public keys. 3. Compare the list of SSH keys with the output from the `ssh-add` command. If you don't see your public key listed, you can [add it to your account](#ssh-setup). ## Render's public key fingerprints Public key fingerprints can be used to validate a connection to a remote server. Render’s public SSH key fingerprints are as follows: | Region | Fingerprint | |--------|--------| | **Virginia** | `SHA256:NCpSwboPnqL/Nvyy2Qc8Kgzpc3P/f3w5wDphhc+UZO0` | | **Frankfurt** | `SHA256:dBRrCEA0tBkvaYLzzDw/mzaANw6nUJO961Zx806spZs` | | **Singapore** | `SHA256:CUlRyv4TZ0vmHwmhsJkII/pz2cO4IgvR+ykqnRsOQFs` | You can also directly add Render's public keys to your `$SSH_DIR/known_hosts` file.
Render's full set of entries is as follows: ```bash # RENDER PUBLIC KEYS # ------------------ # Oregon ssh.oregon.render.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFON8eay2FgHDBIVOLxWn/AWnsDJhCVvlY1igWEFoLD2 # Ohio ssh.ohio.render.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINMjC1BfZQ3CYotN1/EqI48hvBpZ80zfgRdK8NpP58v1 # Virginia ssh.virginia.render.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ6uO0jKQX9IjefnLz+pxTgfPhsPBhNuvxmvCFrxqxAM # Frankfurt ssh.frankfurt.render.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILg6kMvQOQjMREehk1wvBKsfe1I3+acRuS8cVSdLjinK # Singapore ssh.singapore.render.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGVcVcsy7RXA60ZyHs/OMS5aQj4YQy7Qn2nJCXHz4zLA ``` ## Usage details ### Supported key types Render supports the following key types: - `ed25519` - `ecdsa` - `rsa` Render also supports U2F/FIDO hardware-authenticated keys (such as a YubiKey): - `ed25519-sk` - `ecdsa-sk` ### Docker-specific configuration If your service runs a [Docker image](docker), additional configuration is required for it to accept SSH connections: 1. Make sure your image includes OpenSSH (`openssh-server`). 2. Make sure your Dockerfile creates a `~/.ssh` directory for the running user with the correct permissions (`chmod 0700`). 3. If the running user is not the root user, that user must have shell access. - If your Dockerfile references a parent image, you will need to perform these steps in a Dockerfile that you control, making use of the `USER` instruction to change back to a root user and `usermod` (or equivalent) to modify the non-root user, like so: ```dockerfile # Switch to root to modify user USER root RUN usermod -s /bin/bash myuser # Switch back to non-root user USER myuser ``` Additionally, some configurations are not supported: - If your Dockerfile specifies a root user, the account cannot be locked. Use `usermod --unlock root` or `passwd -u root` to unlock the account.
- If your service uses a [persistent disk](disks), you must not mount it to the `$HOME` directory of the running user.

### Cron job connections

When you connect to a cron job from the Dashboard shell, Render spins up a new, temporary instance of the service and connects to it. This instance includes your cron job's latest build and configuration. It does _not_ automatically execute the cron job's command. After you close the shell session, Render deprovisions the instance.

It is not possible to connect to the actual cron job instances that run as part of your cron schedule.

### Automatic session closure

Render automatically closes a service's active SSH sessions in the following cases:

- The service is redeployed or restarted for any reason.
- Render is scheduled to perform maintenance on underlying infrastructure that enables SSH connections.
  - In this case, Render gives existing connections one hour before automatically closing them. For long-running commands, consider spinning up a [one-off job](one-off-jobs) instead of SSHing into an active instance.

### Memory usage

SSH and dashboard shell sessions use the same memory pool that's allocated for your service instance. Using SSH requires about 2 MB of memory, plus about 3 MB for each active session (not including memory used by processes executed during the session).

As an example, let's say we SSH into one service instance from two different computers to run bash. In this case, memory usage would look like this:

- 8 MB for SSH
  - 2 MB for SSH access
  - 2x3 MB for the two SSH sessions
- 7 MB for bash
  - 2x3.5 MB for the two bash processes

Total memory usage in this case is about 15 MB.

# Projects and Environments

Render *projects* enable you to organize your services by application and environment:

[img]

For example, one of your applications might include a static site, a GraphQL backend, and a database. By adding all of these services to the same project, you can find and manage them more quickly.
Each project has one or more *environments* (such as *Production* in the screenshot above). If you run staging and production versions of your app, you can add each version's services to a different environment. You can also set *[environment-specific controls](#environment-specific-controls):* - [Define environment variables and secret files](#scoped-configuration) that only services in a single environment can access. - Prevent your staging services from inadvertently using a production-specific credential (or vice versa). - Designate an environment as [protected](#protected-environments) so that only your workspace's admins can make potentially destructive changes to its resources. - [Block private network traffic](#blocking-cross-environment-traffic) from entering or exiting a specific environment. - Prevent your staging services from inadvertently accessing a production database (or vice versa). ## Setup > *Hobby workspaces can have up to one project with up to two environments.* > > To create additional projects or environments, [upgrade your workspace](team-members#change-a-workspaces-plan). ### Create a project 1. In the [Render Dashboard][dboard], click *New > Project*: [img] The following form appears: [img] 2. Provide a name for your new project, along with a name for its first environment. - Both of these names are for your own informational purposes. You can change them later. 3. Click *Create a project*. That's it! You're redirected to the page for your new project. ### Add services to an environment If an environment in your project is empty, it displays buttons for creating a new service or moving some of your workspace's existing services into the project: [img] You can specify a new service's associated project and environment during the creation flow. 
You can bulk-move services to an environment by selecting them in your workspace's service list and then clicking *Move*: [video] You can also move an individual service by opening its *•••* menu and clicking *Move*. ### Open a project Your workspace's homepage in the [Render Dashboard][dboard] lists all projects at the top: [img] Click a project to open it. Services belonging to a project appear on that project's page, _not_ on your workspace's homepage. ### Modify a project - To add an environment to a project, click *+ Add environment* at the top right of the project's page. - To configure an existing environment, click the *•••* menu at the top right of that environment's section on the project's page. - To rename or delete an entire project, click *Settings* in the left pane of the project's page. > *Important:* > > - Deleting a project deletes _all_ of its associated environments and services. > - Deleting an environment deletes all of its associated services. ## Blueprint support [Blueprints](infrastructure-as-code) (Render's infrastructure-as-code model) support creating projects and environments, along with assigning your resources to them: ```yaml projects: - name: my-project environments: - name: production # These resources will belong to the my-project/production environment. # Do not duplicate these definitions at the root level. services: - name: my-web-service type: web envVars: - key: MY_ENV_VAR value: my-value databases: - name: my-database type: postgres envVars: - key: DATABASE_URL fromDatabase: name: my-database property: connectionString envVarGroups: - name: my-env-group envVars: - key: MY_ENV_VAR value: my-value # Environment-specific settings networking: isolation: enabled permissions: protection: enabled ``` For details, see the [YAML reference](blueprint-spec#projects-and-environments). 
## Environment-specific controls ### Scoped configuration [Environment groups](configure-environment-variables#environment-groups) are a helpful way to share environment variables and/or secret files across multiple services in your workspace. You can optionally scope an environment group to a single project environment. This helps you share configuration across multiple services in that environment, while also ensuring that services in _other_ environments _can't_ use that environment group. Move an environment group into a project environment from the group's info page by clicking **Manage > Move group**: [img] After you move your environment group, it appears on the corresponding project's overview page: [img] ### Protected environments Workspace members with the [**Admin** role](team-members#member-roles) can designate any project environment as **protected**. This restricts other members from performing potentially destructive actions ([listed below](#restricted-actions)). #### Steps to configure 1. Go to your project's page in the [Render Dashboard][dboard] and scroll to the environment you want to configure. 2. Click the **•••** menu at the top right of the environment, then click **All settings**. 3. Scroll down to the **Permissions** section and click **Edit**: [img] 4. Select **Protected** and click **Save**. Protected environments display a label and a lock icon on your project's page in the Render Dashboard: [img] #### Restricted actions > **Important:** If your protected environment includes resources that are managed via [Blueprints](infrastructure-as-code), non-**Admin** workspace members _can_ still modify those resources by publishing an update to the corresponding `render.yaml` file. 
Only **Admin** workspace members can perform the following actions in a protected environment: **Resource management** - Deleting any of the environment's resources (services, [environment groups](#scoped-configuration), etc.), or deleting the environment itself - Creating new resources in the environment - Moving resources into or out of the environment **Operational controls** - Modifying access control IPs for any Render Postgres or Key Value instance in the environment - Suspending or resuming any service in the environment - Toggling [maintenance mode](maintenance-mode) for any service in the environment - Accessing the shell for any service in the environment **Secret values** - Viewing or modifying environment variables or secret files for any service or environment group in the environment - Viewing passwords or connection URLs for any Render Postgres or Key Value instance in the environment ### Blocking cross-environment traffic By default, all of your Render services in the same region can communicate over their shared [private network](private-network). You can configure an environment to _block_ private network traffic from crossing its boundary. If you do, services _within_ the environment can still communicate: [diagram] This helps you prevent your staging services from inadvertently accessing a production resource (or vice versa). > **This setting only affects _private network_ traffic.** > > - Web services and static sites in the environment can still receive public internet traffic at their `onrender.com` subdomain, _including_ traffic originating from your services outside the environment. > - Render Postgres and Key Value instances in the environment can still receive traffic at their external URL from [allowed IPs](postgresql-creating-connecting#restricting-external-access). > - Workspace members can still access services in the environment over [SSH](ssh). 
> - In a [protected environment](#protected-environments), only **Admin** workspace members can access the shell for services in the environment. #### Steps to configure > Toggling this feature does not trigger any deploys or cause any interruptions for your running services. 1. Go to your project's page in the [Render Dashboard][dboard] and scroll to the environment you want to configure. 2. Click the **•••** menu at the top right of the environment, then toggle **Block cross-environment connections**: [img] > **Enabling this feature does not terminate any active network connections.** > > To ensure that all existing connections are terminated, you can restart your services in the environment. You're all set! Your environment now blocks private network traffic from crossing its boundary. ### Environment-level IP rules Enterprise orgs can set inbound IP rules for all services in a particular environment. These rules apply to inbound connections from the public internet. For details, see [Inbound IP rules](inbound-ip-rules). ## FAQ ###### Does an environment named 'Production' or 'Staging' have special restrictions or capabilities? **No.** Render does not apply special logic to any environment based on its name. The examples above use "Production" and "Staging" because they're common. ###### Will my service behave differently after I add it to a project? **Possibly.** If you've configured any [environment-specific controls](#environment-specific-controls) for the service's corresponding environment, those controls apply to the service. For example, if the service's environment [blocks cross-environment network traffic](#blocking-cross-environment-traffic), the service can no longer communicate over your private network with services outside the environment. ###### Can I use projects and environments with Blueprints (Render's infrastructure-as-code model)? 
**Yes.** You can define projects and environments in your `render.yaml` file, then assign new and existing resources to them. For details, see the [YAML reference](blueprint-spec#projects-and-environments). ###### Are preview environments tied to projects? *No.* You manage your workspace's [preview environments](preview-environments) with [Blueprints](infrastructure-as-code), not projects. A preview environment can include services that belong to any number of different projects. ###### Can I use the same service name in multiple project environments? *No.* All of a workspace's services must have unique names—even services that belong to different project environments. # Scaling Render Services You can run multiple instances of a [web service](web-services), [private service](private-services), or [background worker](background-workers) to handle additional load. For services that receive incoming traffic, Render load balances that traffic evenly across all running instances: [diagram] Each instance of a scaled service uses the same instance type and is [billed accordingly](#billing-for-scaled-services). You can scale each service up to a maximum of 100 instances. Render supports two scaling methods: *manual scaling* and *autoscaling*. | Scaling Method | Description | |--------|--------| | [*Manual scaling*](#manual-scaling) | Render runs a fixed number of instances that you specify. Manual scaling is available for all Render workspaces. | | [*Autoscaling*](#autoscaling) | *Available only for [Professional workspaces](professional-features) and higher.* Render automatically scales your number of instances between a specified minimum and maximum, based on target CPU and/or memory utilization. | ## Manual scaling You can manually scale your service to any fixed number of instances, up to a maximum of 100. 1. In the [Render Dashboard][dboard], open your service's *Scaling* page and scroll down to the *Manual Scaling* section: [img] 2. 
Drag the slider to the desired number of instances, or enter a value between `1` and `100` in the text box. 3. Click *Save Changes*. Render immediately provisions or deprovisions instances as needed to match the new instance count. Manual scaling events appear in the timeline on your service's *Events* page: [img] ## Autoscaling > *Autoscaling requires a [*Professional workspace*](professional-features) or higher.* Render can automatically scale your service up and down based on CPU and/or memory utilization targets that you specify. This helps you handle periods of high traffic while also minimizing compute costs. Enable autoscaling for your service from its *Scaling* page in the [Render Dashboard][dboard]: [img] 1. Use the slider to set your desired minimum and maximum instance count, or enter a value in each text box. - Render always keeps your instance count within the specified range, even if resource utilization is significantly below or above your specified target. 2. Scroll down to set your target CPU and/or memory utilization: [img] Enable one or both of the toggles and set your target utilization percentage(s). > If you enable _neither_ toggle, autoscaling is _disabled_ for the service. 3. Click *Save Changes*. Render begins monitoring resource utilization and automatically scales your service up or down as needed based on your specified targets. Autoscaling events appear in the timeline on your service's *Events* page: [img] ### How autoscaling works Render periodically calculates average resource utilization across all instances of your autoscaled service. Using that value (`current_util`), Render determines whether to scale your service based on the following formula: ``` new_instances = ceil[current_instances * (current_util / target_util)] ``` If `new_instances` doesn't equal `current_instances`, Render scales your service up or down to the new instance count. 
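As a concrete sketch of this calculation, the formula can be reproduced in a few lines of Python. The helper function and its name are our own illustration (not part of any Render API); the clamp reflects the documented behavior of always staying within your configured min/max instance range:

```python
import math

def new_instance_count(current_instances, current_util, target_util,
                       min_instances=1, max_instances=100):
    """Sketch of the documented autoscaling formula.

    Utilization values are fractions (0.80 means 80%). The result is
    clamped to the configured instance range, mirroring the behavior
    described above.
    """
    desired = math.ceil(current_instances * (current_util / target_util))
    return max(min_instances, min(max_instances, desired))

# 2 instances at 80% CPU with a 60% target: scale up
print(new_instance_count(2, 0.80, 0.60))  # 3

# 5 instances at 20% memory with a 60% target: scale down
print(new_instance_count(5, 0.20, 0.60))  # 2
```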
> **Render waits a few minutes before scaling a service _down_.** > > If utilization rises again during this period, Render does _not_ scale the service down. This minimizes unnecessary scaling actions during periods of "spiky" usage. > > Render always scales a service _up_ immediately to handle increased load. #### Example 1: Scaling up | Current instances | Current CPU | Target CPU | | ----------------- | ----------- | ---------- | | 2 | 80% | 60% | ``` new_instances = ceil[2 * (80% / 60%)] = 3 ``` In this scenario, Render immediately scales the service up from 2 instances to 3. #### Example 2: Scaling down | Current instances | Current Memory | Target Memory | | ----------------- | -------------- | ------------- | | 5 | 20% | 60% | ``` new_instances = ceil[5 * (20% / 60%)] = 2 ``` In this scenario, Render waits a few minutes, then scales the service down from 5 instances to 2 if memory utilization remains low. > If you set targets for both CPU _and_ memory utilization, Render calculates `new_instances` based on each and uses the larger result. ## Billing for scaled services Billing for a scaled service is based entirely on compute usage, which is prorated by the second. There is no additional cost for performing a scaling action. Here are some example scenarios: | Scenario | Billing Result | | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------ | | You run exactly two instances of a service for an entire month. | You're billed for *2x* the monthly price of your service's instance type. | | Exactly halfway through a month, you manually scale your service from two instances down to one. It remains at one instance for the rest of the month. | You're billed for *1.5x* the monthly price of your service's instance type. 
|
| Every day of a month, your service autoscales from one instance to two for exactly six hours. It then autoscales back down to one instance. | You're billed for *1.25x* the monthly price of your service's instance type. |

See your exact compute usage for the month on your [Billing page](https://dashboard.render.com/billing). You can also review your [invoice history](https://dashboard.render.com/billing#invoice-history).

## Application considerations

- Services with an attached [persistent disk](disks) _cannot_ scale to multiple instances.
- You can update your service's scaling configuration programmatically via the [Render API](api).
  - Separate endpoints are available for [manual scaling](https://api-docs.render.com/reference/scale-service) and [autoscaling](https://api-docs.render.com/reference/autoscale-service).
- If you configure both manual scaling _and_ autoscaling for a service, Render enables autoscaling and ignores the manual scaling configuration.

## Horizontal vs. vertical scaling

The sections above describe Render's support for *horizontal scaling*, where you adjust a service's number of running instances.

In contrast, *vertical scaling* refers to adjusting a service's compute resources (RAM and CPU). You vertically scale a service by changing its instance type in the [Render Dashboard][dboard].

### When to use each

- *Scale horizontally* to handle a higher number of _simultaneous_ tasks (such as incoming requests).
- *Scale vertically* if each _single_ task requires additional RAM or CPU to run efficiently.
  - For particularly resource-intensive tasks, consider offloading to a [background worker](background-workers) to keep your web services responsive.

*Horizontal scaling usually occurs much more frequently than vertical scaling.* Autoscaled services might change their instance count multiple times per day, whereas you might upgrade a service's instance type once a year.
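The billing scenarios in the table above can be sanity-checked with a short sketch: because usage is prorated by the second, the billed multiple of the monthly price is just the time-weighted average instance count. (The helper below is our own illustration, not a Render tool, and assumes a single instance type.)

```python
def prorated_multiplier(segments):
    """Return the billed multiple of the instance type's monthly price.

    `segments` is a list of (fraction_of_month, instance_count) pairs
    that together cover the whole month. Prorating by the second makes
    the bill a time-weighted average of the instance count.
    """
    assert abs(sum(fraction for fraction, _ in segments) - 1.0) < 1e-9
    return sum(fraction * count for fraction, count in segments)

# Two instances for the entire month
print(prorated_multiplier([(1.0, 2)]))            # 2.0

# Two instances for half the month, then one
print(prorated_multiplier([(0.5, 2), (0.5, 1)]))  # 1.5

# Two instances for 6 of every 24 hours, otherwise one
print(prorated_multiplier([(6 / 24, 2), (18 / 24, 1)]))  # 1.25
```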
# Service Previews Render *service previews* enable you to test out proposed changes to a web service or static site before you deploy those changes to production. [img] For each service preview, Render creates a _separate, temporary instance_ of your service with its own `onrender.com` URL (served over HTTP/2 with full TLS), so you can validate your changes safely. Render automatically sets the HTTP response header `X-Robots-Tag: noindex` for all preview instances. There are two types of service previews: - [*Pull request previews*](#pull-request-previews-git-backed) (for Git-backed services) - [*Image previews*](#image-previews) (for services that deploy a Docker image from a container registry) See details for each below. > *Service previews only replicate the service with proposed changes.* > > To create temporary instances of _multiple_ services (including datastores) for integration testing, see [Preview Environments](preview-environments). ## Pull request previews (Git-backed) For Git-backed services, Render can create a service preview for pull requests opened against your linked branch. You can create a separate preview for every pull request, or only for pull requests that you specify. ### Steps to enable 1. In the [Render Dashboard][dboard], select your service and open its *Previews* tab: 2. Under *Pull Request Previews*, select either *Manual* or *Automatic*: [img] For details on each option, see [Manual vs. automatic previews](#manual-vs-automatic-pr-previews). That's it! After you enable service previews, active preview instances appear on your service's *Previews* tab: [img] Preview details also appear on their associated PR: - *On GitHub,* preview instances are represented as deployments associated with your PR: [img] Click *View deployment* to open the preview instance in your browser. - *On GitLab and Bitbucket,* Render adds a comment to your PR with a link to your preview instance. ### Manual vs. 
automatic PR previews | Preview Mode | Description | |--------|--------| | *Manual* | By default, Render does _not_ create PR previews. To create a preview for a specific PR, do any of the following: - Add the label `render-preview` to the PR (GitHub/GitLab only). - Include the string `[render preview]` in your PR's _title_ (not the commit message). You can add or remove the above values from an existing PR at any time. If you do, Render creates or deprovisions the associated preview instance accordingly. | | *Automatic* | By default, Render creates a preview instance for _every_ PR against your service's linked branch. To skip creating a preview for a specific PR, do any of the following: - Add the label `render-preview-skip` to the PR (GitHub/GitLab only). - Include any of the following strings in your PR's _title_ (not the commit message): - `[skip preview]` - `[preview skip]` - `[skip render]` - `[render skip]` You can add or remove the above values from an existing PR at any time. If you do, Render creates or deprovisions the associated preview instance accordingly. > *Your pull request's title might be included in the message for its associated merge commit.* If you use `[skip render]` or `[render skip]`, this also [skips the auto-deploy](deploys#skipping-an-auto-deploy) for the service when merged. To avoid this, instead use `[skip preview]` or `[preview skip]`. | ### Working with PR previews - Preview instances copy all of their settings over from their base service when they're first created. *This includes environment variables, such as database connection information.* > Make sure to change environment variables on your preview instance if you want it to use a staging or test database. - Your app can detect whether it's a PR preview by checking the value of the `IS_PULL_REQUEST` environment variable (`true` for a PR preview, `false` otherwise). 
- Whenever you push to the branch for an open PR, Render automatically updates the PR preview by building and deploying the latest commit. - Render *automatically deletes* a PR preview instance when its associated PR is merged or closed. - You can _manually_ delete a PR preview instance from its *Settings* tab in the [Render Dashboard][dboard]. However, Render _recreates_ the instance if you push new changes to the associated PR branch. - If you're using a monorepo, you can fine-tune its PR preview behavior by [defining the root directory or specifying build filters](monorepo-support#using-with-service-previews). - If you make changes to your base service after creating a PR preview, those changes are _not_ applied to the preview instance. ### Billing for PR previews *PR Previews are billed at the same rate as your base service.* They are always prorated by the second. - If your base service is a free static site, its PR previews are also free. - If your base service costs \$25 per month and one of its PR preview instances is active for _half_ of a month, that preview instance costs a total of \$12.50. ## Image previews For [image-backed services](deploying-an-image), you can create a service preview using the [Render API](api). Specifically, you use the [Create service preview](https://api-docs.render.com/reference/preview-service) endpoint: ``` POST https://api.render.com/v1/services/{serviceId}/preview ``` You can add this API request to your CI pipeline to automatically generate an image preview whenever an image tag is created or updated in your container registry. [See an example.](#example-github-action) Preview instances can deploy any tag or digest for the base service's associated Docker image. For details, see the [API reference](https://api-docs.render.com/reference/preview-service). 
You can view all active previews from your service's **Previews** tab in the [Render Dashboard][dboard]: [img] ### Working with image previews - Preview instances copy all of their settings over from their base service when they're first created. **This includes environment variables, such as database connection information.** > Make sure to change environment variables on your preview instance if you want it to use a staging or test database. - **Render does not automatically delete image previews.** Make sure to delete image previews when you're finished with them, either from the Render Dashboard or [via the Render API](https://api-docs.render.com/reference/delete-service). - If you make changes to your base service after creating an image preview, those changes are _not_ applied to the preview instance. ### Billing for image previews **An image preview is billed according to the instance type you specify in your API request.** If you don't specify an instance type, Render uses the same instance type as the base service. > If your base service uses a paid instance type, its previews can't use the [Free instance type](free). ### Example GitHub Action This example uses GitHub Actions and pushes images to Docker Hub, but the high-level steps apply to any combination of CI provider and container registry. ```yaml # This GitHub Action demonstrates building a Docker image, # pushing it to Docker Hub, and creating a Render build # preview with every push to the main branch. # # This Action requires setting the following secrets: # # - DOCKERHUB_USERNAME # - DOCKERHUB_ACCESS_TOKEN (create in Docker Hub) # - RENDER_API_KEY (create from the Account Settings page) # - RENDER_SERVICE_ID (the service to create a preview for) # # You must also set env.DOCKERHUB_REPOSITORY_URL below. # # Remember to delete previews when you're done with them! # You can do this from the Render Dashboard or via the # Render API. 
name: Preview Docker Image on Render # Fires whenever commits are pushed to the main branch # (including when a PR is merged) on: push: branches: ['main'] env: # Replace with the URL for your image's repository DOCKERHUB_REPOSITORY_URL: REPLACE_ME jobs: build: runs-on: ubuntu-latest steps: - name: Check out the repo uses: actions/checkout@v5 - name: Build the Docker image run: docker build . --file Dockerfile --tag $DOCKERHUB_REPOSITORY_URL:$(date +%s) - name: Log in to Docker Hub uses: docker/login-action@v2.2.0 with: username: ${{ secrets.DOCKERHUB_USERNAME }} password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN }} - name: Docker Metadata action uses: docker/metadata-action@v4.6.0 id: meta with: images: ${{env.DOCKERHUB_REPOSITORY_URL}} - name: Build and push Docker image uses: docker/build-push-action@v4.1.1 id: build with: context: . file: ./Dockerfile push: true tags: ${{ steps.meta.outputs.tags }} labels: ${{ steps.meta.outputs.labels }} - name: Create Render service preview uses: fjogeleit/http-request-action@v1 with: # Render API endpoint for creating a service preview url: 'https://api.render.com/v1/services/${{ secrets.RENDER_SERVICE_ID }}/preview' method: 'POST' # All Render API requests require a valid API key. bearerToken: ${{ secrets.RENDER_API_KEY }} # Here we specify the digest of the image we just # built. You can alternatively provide the image's # tag (main) instead of a digest. data: '{"imagePath": "${{ env.DOCKERHUB_REPOSITORY_URL }}@${{ steps.build.outputs.digest }}"}' ``` # Rollbacks To revert undesired code changes as quickly as possible, you can *roll back* your service to a previous successful deploy. Render can reuse [build artifacts](#build-retention) from recent deploys, so rollbacks complete much faster than building a new version of your service. ## Triggering a rollback **Dashboard** 1. In the [Render Dashboard][dboard], go to your service's *Events* page. 2. In the event list, find a recent successful deploy and click *Rollback*: [img] 3. 
On the confirmation page, click *Rollback to this deploy*. That's it! Render kicks off a _new_ deploy using the target deploy's build artifact. **API** To trigger a rollback via the Render API, use the [Roll back deploy](https://api-docs.render.com/reference/rollback-deploy) endpoint. > *This endpoint does _not_ disable automatic deploys for the service.* > > This means that pushing a new commit to the service's linked branch will trigger a new deploy, which might reintroduce the undesired code change. > > You can disable automatic deploys for the service using the [Update service](https://api-docs.render.com/reference/update-service) endpoint. Set the `autoDeploy` parameter to `false`. ## Reenabling automatic deploys Triggering a rollback in the Render Dashboard automatically disables [autodeploys](deploys#automatic-git-deploys) for the service. This safeguard prevents new changes from triggering a deploy that might reintroduce the undesired code change. After you resolve the underlying issue, you can reenable automatic deploys from your service's *Settings* page: [img] > *Rolling back via the Render API does _not_ disable automatic deploys.* > > For details, see the *API* tab under [Triggering a rollback](#triggering-a-rollback). ## Build retention Render retains a fixed number of recent build artifacts for each service, based on your [workspace plan](pricing). You can only roll back to a particular deploy if its build artifact is still available. ## What's rolled back? ### Service-specific config When you roll back, Render reuses certain configuration details from the target deploy you selected. Other settings use the service's _current_ configuration. > *Rolling back does not overwrite any of your service's current configuration settings.* > > Render reuses the target deploy's settings _only_ for the rollback. When you next trigger a _standard_ deploy, Render uses the service's current configuration as usual. 
🟢 Matches the target deploy ❌ Uses the service's current configuration 🟨 Partially matches the target deploy (details provided) | Configuration | Matches target deploy | |--------|--------| | Start command | 🟢 | | [Health check path](deploys#health-checks) | 🟢 | | Docker command | 🟢 | | Registry-hosted Docker image | 🟨 [See details below.](#registry-hosted-docker-images) | | Build artifact | 🟢 | | Instance count | 🟢 | | Environment variables | 🟢 | | Environment groups | 🟨 [See details below.](#environment-groups) | | Disks | ❌ Disks retain state between all deploys and cannot be rolled back. Separately from rolling back, you can [restore a disk snapshot](disks#disk-snapshots). | | Instance type | ❌ If the target deploy requires a larger instance type, consider upgrading your instance type before triggering a rollback. | | Custom domains | ❌ | | Static site redirects and rewrites | ❌ | | Static site headers | ❌ | #### Registry-hosted Docker images If your service pulls and deploys a [prebuilt Docker image](deploying-an-image) from a container registry, the rollback uses the same image tag or digest as the target deploy. As part of the rollback, Render pulls the image again. This has the following implications: - If the target deploy specified its image with a tag, Render pulls the _latest_ image associated with that tag. This image might differ from the one used in the target deploy. - Unlike tags, digests always refer to the exact same image. Using a digest ensures that rollbacks behave more predictably. - If the specified image is no longer available or reachable in the registry, the rollback fails. #### Environment groups [Environment groups](configure-environment-variables#environment-groups) enable sharing configuration across multiple services. Rolling back does _not_ modify any values in an environment group, because other services might also depend on those values. 
However, rolling back might modify _which_ environment groups are applied to the service (specifically, if the target deploy used a different set of environment groups from the service's current configuration).

> *Note the following*:
>
> - Because rollbacks skip the build step, any recent changes to environment group variables are not reflected in the build artifact.
> - If the target deploy included an environment group that has since been deleted, the rollback proceeds without it.

### Platform-level config

Rolling back _does not_ revert any changes that Render has made to the underlying platform since the target deploy. For example, if Render has since updated the native runtime for your service's programming language, the rollback uses the updated runtime.

# Maintenance Mode

To help you make major infrastructure changes safely, you can enable *maintenance mode* for any paid web service:

[img]

A web service in maintenance mode remains up and running, but it's unreachable from the public internet. This helps you ensure that no user actions are in progress while you make changes.

> A web service in maintenance mode is still reachable over your [private network](private-network), and via [SSH](ssh).

## Steps to enable

> Maintenance mode is available only for paid [web services](web-services).

1. From your web service's *Settings* page in the [Render Dashboard][dboard], scroll down to the *Maintenance Mode* section:

   [img]

2. Toggle the switch and confirm your action in the dialog that appears.

That's it! After you confirm, Render immediately enables maintenance mode for the service. You can disable it at any time by toggling the switch back.

## Response format

While your web service is in maintenance mode, Render responds to every incoming request with a `503 Service Unavailable` status code and your specified *maintenance page*:

- By default, Render displays [this maintenance page](https://maintenance-mode-example.onrender.com/).
- Set a custom maintenance page by specifying its URL in your service's maintenance mode settings.
  - *This must _not_ be a URL of the service in maintenance mode.* We recommend providing the URL of a page on a [static site](static-sites).
  - If your custom URL returns an error, Render responds with that error (not the default maintenance page).

# One-Off Jobs

Sometimes it's useful to spin up a short-lived process to run a specific task, such as asset compilation or a database migration. You can do this on Render by creating a *one-off job*.

A one-off job uses the same build artifact and configuration as one of your existing Render services (this is the job's *base service*). This means the job can execute any of the base service's defined scripts and access its environment variables.

> A one-off job cannot access its base service's [persistent disk](disks) (if it has one).

While a one-off job is running, it's billed at the per-second rate for its specified [instance type](#instance-type-ids).

## Running a one-off job

You create a one-off job with the Render API's [Create job](https://api-docs.render.com/reference/post-job) endpoint.

> *New to the Render API?* [Get started.](api)

This example `curl` command creates a one-off job that runs the command `echo hi` and exits:

```bash
curl --request POST 'https://api.render.com/v1/services/YOUR_SERVICE_ID/jobs' \
     --header 'Authorization: Bearer YOUR_API_KEY' \
     --header 'Content-Type: application/json' \
     --data-raw '{ "startCommand": "echo hi" }'
```

To try out this example, substitute your service ID and API key where indicated.

- Your service ID is available in the [Render Dashboard][dboard]. Navigate to your service's page and copy the ID from the URL in your browser.
  - This value starts with `crn-` for cron jobs and `srv-` for other service types.
- Learn how to [create an API key](api#1-create-an-api-key).
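You can issue the same request from a script. Here's a minimal sketch in Python using only the standard library, mirroring the `curl` example above (substitute the same service ID and API key placeholders):

```python
import json
import urllib.request

# Placeholders — substitute your real service ID and API key.
SERVICE_ID = "YOUR_SERVICE_ID"
API_KEY = "YOUR_API_KEY"

# Build the same POST request as the curl example above.
request = urllib.request.Request(
    f"https://api.render.com/v1/services/{SERVICE_ID}/jobs",
    data=json.dumps({"startCommand": "echo hi"}).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to send the request and print the created job's ID:
# with urllib.request.urlopen(request) as response:
#     job = json.load(response)
#     print(job["id"])
```

The actual send is left commented out so you can inspect the request first; any HTTP client (such as `requests`) works equally well here.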
The Create job endpoint requires a `startCommand` parameter, which specifies the command Render will run to start the job.

You can optionally specify a `planId` to use a different instance type from the job's base service. [See supported instance types](#instance-type-ids).

### Build and environment

On creation, a one-off job obtains the following values from its base service:

- The base service's most recent successful build artifact
- All of the base service's configured [environment variables](configure-environment-variables)

A job uses this "snapshot" of the base service for its own execution. If these values later change in the base service, existing jobs are not affected.

### Response format

If creation succeeds, the [Create job endpoint](https://api-docs.render.com/reference/post-job) returns a JSON object with the following fields:

```json
{
  "id": "job-c3rfdgg6n88pa7t3a6ag",
  "serviceId": "crn-c24q2tmcie6so2aq3n90",
  "startCommand": "echo hi",
  "planId": "plan-crn-002",
  "createdAt": "2025-03-20T12:16:02.544199-04:00"
}
```

Next, you can [track the job's progress](#tracking-job-progress).

### Tracking job progress

You can track a one-off job's progress in the Render Dashboard or via the Render API. Logs generated by one-off jobs are also included in your workspace's [log stream](log-streams).
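To poll via the API, you need the job's ID from the creation response. As a small sketch, using the example creation response shown earlier, you might extract the IDs and build the polling URL like this:

```python
import json

# The example creation response shown earlier in this section.
creation_response = """
{
  "id": "job-c3rfdgg6n88pa7t3a6ag",
  "serviceId": "crn-c24q2tmcie6so2aq3n90",
  "startCommand": "echo hi",
  "planId": "plan-crn-002",
  "createdAt": "2025-03-20T12:16:02.544199-04:00"
}
"""

job = json.loads(creation_response)

# The service ID and job ID together identify the job for status polling.
poll_url = f"https://api.render.com/v1/services/{job['serviceId']}/jobs/{job['id']}"
print(poll_url)
```

The resulting URL is the one the Retrieve job endpoint expects, as shown in the `curl` example below.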
Select a tab for details:

**API**

You can poll for a job's status using the Render API's [Retrieve job](https://api-docs.render.com/reference/retrieve-job) endpoint:

```bash
curl --request GET 'https://api.render.com/v1/services/YOUR_SERVICE_ID/jobs/YOUR_JOB_ID' \
     --header 'Authorization: Bearer YOUR_API_KEY'
```

This endpoint's response includes timestamps for when the job started and finished, along with its status:

```json
{
  "id": "job-c3rfdgg6n88pa7t3a6ag",
  "serviceId": "crn-c24q2tmcie6so2aq3n90",
  "startCommand": "echo hi",
  "planId": "plan-crn-002",
  "createdAt": "2025-03-20T07:20:05.777035-07:00",
  "startedAt": "2025-03-20T07:24:12.987032-07:00",
  "finishedAt": "2025-03-20T07:27:14.234587-07:00",
  "status": "succeeded"
}
```

You can also list a service's jobs (with optional filters) using the [List jobs](https://api-docs.render.com/reference/list-job) endpoint.

**Dashboard**

In the [Render Dashboard][dboard], you can view the details of current and past one-off jobs from your base service's **Jobs** page:

[img]

This page also displays the logs for recent job runs. Render retains logs for one-off jobs according to your workspace's [log retention period](logging#retention-period).

### Terminating a job

- A one-off job terminates whenever its specified `startCommand` exits. Render automatically deprovisions the job's instance.
- You can terminate a one-off job manually with the [Cancel running job](https://api-docs.render.com/reference/cancel-job) endpoint, or from the base service's **Jobs** page in the Render Dashboard.
- If a one-off job hasn't exited after 30 days, Render automatically terminates it.
- If you redeploy or suspend the base service of a running one-off job, the job is _not_ terminated. It continues running using its existing build artifact and configuration.

## Instance type IDs

By default, a one-off job uses the same instance type as its base service and is billed accordingly.
You can use a different instance type by providing a `planId` parameter to the [Create job](https://api-docs.render.com/reference/post-job) endpoint. This is most commonly useful for running basic tasks on lower-cost compute.

See below for supported values of `planId` according to your base service's type:

### Cron jobs

These instance type IDs apply only to cron job services.

| Instance Type ID | Instance Type | Specs |
|--------|--------|--------|
| `plan-crn-003` | Starter | 512 MB RAM, 0.5 CPU |
| `plan-crn-005` | Standard | 2 GB RAM, 1 CPU |
| `plan-crn-007` | Pro | 4 GB RAM, 2 CPU |
| `plan-crn-008` | Pro Plus | 8 GB RAM, 4 CPU |

### All other service types

These instance type IDs apply to web services, private services, and background workers.

| Instance Type ID | Instance Type | Specs |
|--------|--------|--------|
| `plan-srv-006` | Starter | 512 MB RAM, 0.5 CPU |
| `plan-srv-008` | Standard | 2 GB RAM, 1 CPU |
| `plan-srv-010` | Pro | 4 GB RAM, 2 CPU |
| `plan-srv-011` | Pro Plus | 8 GB RAM, 4 CPU |
| `plan-srv-013` | Pro Max | 16 GB RAM, 4 CPU |
| `plan-srv-014` | Pro Ultra | 32 GB RAM, 8 CPU |

# Render Blueprints (IaC)

*Blueprints* are Render's infrastructure-as-code (IaC) model for defining, deploying, and managing multiple resources with a single YAML file:

[diagram]

*Show example Blueprint*

```yaml
# This is a basic example Blueprint for a Django web service and
# the Render Postgres database it connects to.
services:
  # A Python web service named django-app running on a free instance
  - type: web
    plan: free
    name: django-app
    runtime: python
    repo: https://github.com/render-examples/django.git
    buildCommand: './build.sh'
    startCommand: 'python -m gunicorn mysite.asgi:application -k uvicorn.workers.UvicornWorker'
    envVars:
      # Sets DATABASE_URL to the connection string of the django-app-db database
      - key: DATABASE_URL
        fromDatabase:
          name: django-app-db
          property: connectionString

databases:
  # A Render Postgres database named django-app-db running on a free instance
  - name: django-app-db
    plan: free
```

A Blueprint acts as the single source of truth for configuring an interconnected set of services, databases, and [environment groups](configure-environment-variables#environment-groups). Whenever you update a Blueprint, Render automatically redeploys any affected services to apply the new configuration (you can [disable this](#disabling-automatic-sync)).

As your infrastructure grows over time, Blueprints become more and more helpful for managing changes and additions to it.

> **Do not manage a particular service, database, or environment group with more than one Blueprint.**
>
> If you do this, Render always attempts to apply the configuration from whichever Blueprint was synced most recently. If the Blueprints differ in their configuration, this can result in unpredictable behavior for your services.
>
> To avoid this scenario, make sure that each of your resources is managed by at most one Blueprint.

## Setup

1. From the root of a Git repo, create an empty file named `render.yaml`.
   - Every Blueprint file uses the name `render.yaml` and resides at the root of a Git repo.
2. Populate `render.yaml` with the details of the resources you want to create and manage.
   - If you're testing out Blueprints, try pasting the example Blueprint at the [top of this page](infrastructure-as-code).
   - See also the complete [Blueprint specification reference](blueprint-spec).
3.
Commit and push your changes to GitHub or GitLab.
4. Open the [Render Dashboard][dboard] and click **New > Blueprint**:

   [img]

5. In the list that appears, click the **Connect** button for whichever repo contains your Blueprint.
   - You'll first need to connect your [GitHub](github)/[GitLab](gitlab) account if you haven't yet.
6. In the form that appears, provide a name for your Blueprint and specify which branch of your repo to link.
   - Each push to this branch that modifies `render.yaml` triggers a deploy of any added or modified resources.
7. Review the list of the changes that Render will apply based on the linked Blueprint:

   [img]

   If your Blueprint file contains errors, the page instead displays details about those errors.
8. If everything looks correct, click **Apply**.

You're all set! Render begins provisioning the resources defined in your Blueprint:

[img]

## Generating a Blueprint from existing services

You can generate a `render.yaml` file using any combination of your existing Render services. This is useful if you want to start managing those exact resources with a Blueprint, or if you want to [replicate those resources](#replicating-a-blueprint).

In the [Render Dashboard][dboard], select any number of your services, then click **Generate Blueprint** at the bottom of the page:

[img]

This opens a page where you can download or copy the generated `render.yaml` file. The page provides additional instructions for creating a Blueprint from that file.

> **Important:** For security, the generated `render.yaml` file includes the _names_ of all defined environment variables for the selected services, _but not their values_. Instead, the file sets `sync: false` for each environment variable.
>
> If you use your `render.yaml` file to create a Blueprint with _new_ services instead of your existing ones, you'll need to provide values for these environment variables. For details, see [Setting environment variables](blueprint-spec#setting-environment-variables).
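To illustrate, a generated file might contain entries like the following. This is a hypothetical excerpt (the service and variable names are made up); the point is that variable values are replaced with `sync: false` placeholders:

```yaml
# Hypothetical excerpt of a generated render.yaml. Variable *names* are
# included, but their values are omitted and replaced with sync: false.
services:
  - type: web
    name: my-existing-app # hypothetical service name
    runtime: node
    buildCommand: npm install
    startCommand: npm start
    envVars:
      - key: API_SECRET # hypothetical variable name
        sync: false # Render prompts you for this value in the Dashboard
      - key: DATABASE_URL
        sync: false
```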
## Replicating a Blueprint

You can create multiple Blueprints from a single `render.yaml` file. Each Blueprint creates and manages a completely independent set of resources.

The Blueprint creation flow displays a notice if your new Blueprint matches existing Render resources:

[img]

To replicate your Blueprint with a separate set of resources, click **Create New Resources**. Render appends a suffix to the name of each new resource to prevent collisions with your existing resources:

[img]

Click **Apply** to create the new resources as usual.

## Managing Blueprint resources

### Adding an existing resource

> **Do not add an existing resource to a Blueprint if it's already managed by _another_ Blueprint.**
>
> Doing so can lead to unpredictable behavior for your services.

You can add an existing Render resource to your Blueprint. To do so, add the resource's details to your `render.yaml` file as you would for a new resource. See all supported fields and values for each service type in the [Blueprint specification reference](blueprint-spec).

**Make sure to include all configuration options that are currently set for the resource in the Render Dashboard.** For most services, this includes the service's `name`, `type`, `plan` (instance type), `buildCommand`, `startCommand`, and so on. If you omit some of these options, your Blueprint will use a default value that almost certainly differs from your service's existing configuration.

When you next sync your Blueprint, Render applies the new configuration to the existing resource. The resource retains any existing environment variable values that aren't overwritten by the Blueprint.

### Modifying a resource outside of its Blueprint

You _can_ still make changes to a Blueprint-managed resource in the [Render Dashboard][dboard]. However, if any of those changes conflict with configuration defined in the Blueprint, they're overwritten the next time you sync your Blueprint.
Even if you _delete_ a Blueprint-managed resource in the Render Dashboard, Render recreates it the next time you sync your Blueprint! See [Deleting a resource](#deleting-a-resource).

### Deleting a resource

*Syncing a Blueprint never deletes an existing resource.* This is true even if you remove a resource definition from your Blueprint file, or if you disconnect your Blueprint from Render entirely.

This is a safeguard against accidental deletions (for example, if you revert your Blueprint to a commit that predates the addition of a critical resource).

To delete a Blueprint-managed resource, _first_ remove it from your Blueprint, _then_ delete it in the [Render Dashboard][dboard] as usual.

> If you delete a resource in the Render Dashboard but _keep_ it in your Blueprint, Render _recreates_ that resource the next time you sync your Blueprint.

## Disabling automatic sync

By default, Render automatically updates affected resources every time you push Blueprint changes to your linked branch. To instead control exactly when you sync a particular Blueprint, set *Auto Sync* to *No* on your Blueprint's Settings page:

[img]

You can then manually trigger a sync by clicking *Manual Sync* on your Blueprint's page.

## Supported fields and values

See the complete [Blueprint specification reference](blueprint-spec).

# Blueprint YAML Reference

Every [Render Blueprint](infrastructure-as-code) is backed by a YAML file that defines a set of interconnected services, databases, and environment groups.

A Blueprint file _must_ be named `render.yaml`, and it _must_ be located in the root directory of a Git repository.

This reference page provides an [example Blueprint file](#example-blueprint-file), along with documentation for supported fields.

## Example Blueprint file

The following `render.yaml` file demonstrates usage for _most_ supported fields. These fields are documented in further detail below.
*Show example Blueprint file*

```yaml:render.yaml
#################################################################
# Example render.yaml                                           #
# Do not use this file directly! Consult it for reference only. #
#################################################################

previews:
  generation: automatic # Enable preview environments

# List services *except* Render Postgres databases here
services:
  # A web service on the Ruby native runtime
  - type: web
    runtime: ruby
    name: sinatra-app
    repo: https://github.com/render-examples/sinatra # Default: Repo containing render.yaml
    numInstances: 3 # Manual scaling configuration. Default: 1 for new services
    region: frankfurt # Default: oregon
    plan: standard # Default: starter
    branch: prod # Default: master
    buildCommand: bundle install
    preDeployCommand: bundle exec ruby migrate.rb
    startCommand: bundle exec ruby main.rb
    autoDeployTrigger: 'off' # Disable automatic deploys
    maxShutdownDelaySeconds: 120 # Increase graceful shutdown period. Default: 30, Max: 300
    domains: # Custom domains
      - example.com
      - www.example.org
    envVars: # Environment variables
      - key: API_BASE_URL
        value: https://api.example.com # Hardcoded value
      - key: APP_SECRET
        generateValue: true # Generate a base64-encoded 256-bit value
      - key: STRIPE_API_KEY
        sync: false # Prompt for a value in the Render Dashboard
      - key: DATABASE_URL
        fromDatabase: # Reference a property of a database (see available properties below)
          name: mydatabase
          property: connectionString
      - key: MINIO_PASSWORD
        fromService: # Reference a value from another service
          name: minio
          type: pserv
          envVarKey: MINIO_ROOT_PASSWORD
      - fromGroup: my-env-group # Add all variables from an environment group
    ipAllowList: # Optional (defaults to allow all); Enterprise workspaces only
      - source: 203.0.113.4/30
        description: office
      - source: 198.51.100.1
        description: home

  # A web service that builds from a Dockerfile
  - type: web
    runtime: docker
    name: webdis
    repo: https://github.com/render-examples/webdis.git # Default: Repo containing render.yaml
    rootDir: webdis # Default: Repo root
    dockerCommand: ./webdis.sh # Default: Dockerfile CMD
    scaling: # Autoscaling configuration
      minInstances: 1
      maxInstances: 3
      targetMemoryPercent: 60 # Optional if targetCPUPercent is set
      targetCPUPercent: 60 # Optional if targetMemoryPercent is set
    healthCheckPath: /
    registryCredential: # Default: No credential
      fromRegistryCreds:
        name: my-credentials
    envVars:
      - key: REDIS_HOST
        fromService: # Reference a property from another service (see available properties below)
          type: keyvalue
          name: lightning
          property: host
      - key: REDIS_PORT
        fromService:
          type: keyvalue
          name: lightning
          property: port
      - fromGroup: conc-settings

  # A private service with an attached persistent disk
  - type: pserv
    runtime: docker
    name: minio
    repo: https://github.com/render-examples/minio.git # Default: Repo containing render.yaml
    envVars:
      - key: MINIO_ROOT_PASSWORD
        generateValue: true # Generate a base64-encoded 256-bit value
      - key: MINIO_ROOT_USER
        sync: false # Prompt for a value in the Render Dashboard
      - key: PORT
        value: 10000
    disk: # Persistent disk configuration
      name: data
      mountPath: /data
      sizeGB: 10 # optional

  # A Python cron job that runs every hour
  - type: cron
    name: date
    runtime: python
    schedule: '0 * * * *'
    buildCommand: 'true' # ensure it's a string
    startCommand: date
    repo: https://github.com/render-examples/docker.git # optional

  # A Dockerfile-based background worker
  - type: worker
    name: queue
    runtime: docker
    dockerfilePath: ./sub/Dockerfile # Optional
    dockerContext: ./sub/src # Optional
    branch: queue # Optional

  # A static site
  - type: web
    name: my-blog
    runtime: static
    buildCommand: yarn build
    staticPublishPath: ./build
    previews:
      generation: automatic # Enable service previews
    buildFilter:
      paths:
        - src/**/*.js
      ignoredPaths:
        - src/**/*.test.js
    headers:
      - path: /*
        name: X-Frame-Options
        value: sameorigin
    routes:
      - type: redirect
        source: /old
        destination: /new
      - type: rewrite
        source: /a/*
        destination: /a
    ipAllowList: # Optional (defaults to allow all); Enterprise workspaces only
      - source: 203.0.113.4/30
        description: office
      - source: 198.51.100.1
        description: home

  # A Key Value instance
  - type: keyvalue
    name: lightning
    ipAllowList: # Required
      - source: 0.0.0.0/0
        description: everywhere
    plan: free # Default: starter
    maxmemoryPolicy: noeviction # Default: allkeys-lru

# List Render Postgres databases here
databases:
  # A database with one read replica
  - name: elephant
    databaseName: mydb # Optional (Render may add a suffix)
    user: adrian # Optional
    ipAllowList: # Optional (defaults to allow all)
      - source: 203.0.113.4/30
        description: office
      - source: 198.51.100.1
        description: home
    readReplicas:
      - name: elephant-replica

  # A database that allows only private network connections
  - name: private database
    databaseName: private
    ipAllowList: [] # No entries in the IP allow list

  # A database with specified disk size
  - name: pachyderm
    plan: basic-1gb
    diskSizeGB: 35

  # A database that enables high availability
  - name: highly available database
    plan: pro-8gb
    highAvailability:
      enabled: true

# Environment groups
envVarGroups:
  - name: conc-settings
    envVars:
      - key: CONCURRENCY
        value: 2
      - key: SECRET
        generateValue: true
  - name: stripe
    envVars:
      - key: STRIPE_API_URL
        value: https://api.stripe.com/v2
```

## IDE validation

The Render Blueprint specification is served from [SchemaStore.org](https://www.schemastore.org/json/), which many popular IDEs use to provide live validation and autocompletion for JSON and YAML files.

For VS Code, install the [YAML extension by Red Hat](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml) to enable validation of `render.yaml` files:

[img]

If your IDE _doesn't_ integrate with SchemaStore.org, the Blueprint specification is also hosted at `https://render.com/schema/render.yaml.json` in JSON Schema format. Consult your IDE's documentation to learn how to use this schema for validation.
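For example, with the Red Hat YAML extension you can associate the schema manually via the extension's `yaml.schemas` setting in VS Code's `settings.json`. This is a sketch; the file-match pattern is up to you:

```json
{
  "yaml.schemas": {
    "https://render.com/schema/render.yaml.json": "render.yaml"
  }
}
```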
## Root-level fields

The following fields are valid at the root level of a `render.yaml` file:

| Field | Description |
|--------|--------|
| `services` | A list of _non-Postgres_ services to manage with the Blueprint. Each entry is an object that represents a single service. See all [service fields](#service-fields). Services in this top-level list keep their currently assigned environment (if any) after each sync. To move a service into a specific environment, instead define it in the `services` list for that [environment](#environment-fields). To remove a service from its current environment, instead define it under the [`ungrouped`](#ungrouped) field. **Do not define the same service in more than one location.** |
| `databases` | A list of Postgres databases to manage with the Blueprint. Each entry is an object that represents a single database. See all [database fields](#database-fields). Databases in this top-level list keep their currently assigned environment (if any) after each sync. To move a database into a specific environment, instead define it in the `databases` list for that [environment](#environment-fields). To remove a database from its current environment, instead define it under the [`ungrouped`](#ungrouped) field. **Do not define the same database in more than one location.** |
| `envVarGroups` | A list of [environment groups](configure-environment-variables#environment-groups) to manage with the Blueprint. Each entry is an object that represents a single environment group. See [supported fields](#environment-groups). Environment groups in this top-level list keep their currently assigned environment (if any) after each sync. To move an environment group into a specific environment, instead define it in the `envVarGroups` list for that [environment](#environment-fields). To remove an environment group from its current environment, instead define it under the [`ungrouped`](#ungrouped) field. **Do not define the same environment group in more than one location.** |
| `projects` | A list of [projects](projects) to manage with the Blueprint. A project defines one or more `environments`, each of which lists the services and environment groups that belong to it. For details, see [Projects and environments](#projects-and-environments). |
| `ungrouped` | An object for defining resources that should not belong to any [environment](#projects-and-environments). Can contain optional fields `services`, `databases`, and `envVarGroups`, each of which matches the format of its root-level counterpart: `ungrouped: services: - type: web name: my-service #...` Moving a resource definition into this object removes it from its current [environment](#projects-and-environments), guaranteeing that it is "ungrouped". In contrast, root-level definitions keep their currently assigned environment (if any). **Do not define the same resource in more than one location.** |
| `previews.generation` | The generation mode to use for [preview environments](preview-environments): `previews: generation: manual` Supported values are `off`, `manual`, and `automatic`. For details on each, see [Manual vs. automatic preview environments](preview-environments#manual-vs-automatic-preview-environments). If you omit this field, preview environments are disabled for any linked Blueprints. Setting the deprecated field `previewsEnabled: true` is equivalent to setting this field to `automatic`. This field does not affect configuration for individual [service previews](service-previews). |
| `previews.expireAfterDays` | The number of days to retain a [preview environment](preview-environments) that receives no updates. After this period, Render automatically deprovisions the preview environment to help reduce your compute costs. If you omit this field, preview environments are retained until their associated pull request is closed. For details, see [Automatic expiration](preview-environments#automatic-expiration). |

## Service fields

Each entry in a Blueprint file's `services` list is an object that represents a single, _non-Postgres_ service. (You define Postgres databases in the [`databases` list](#database-fields).)

See below for supported fields.

### Essential fields

These fields pertain to a service's core configuration (name, runtime, region, and so on).

| Field | Description |
|--------|--------|
| `name` | **Required.** The service's name. Provide a unique name for each service in your Blueprint file. If you add the name of an _existing_ service to your Blueprint file, Render attempts to apply the Blueprint's configuration to that existing service. |
| `type` | **Required.** The type of service. One of the following: `web` for a [web service](web-services) _or_ [static site](static-sites) (for a static site, you also set [`runtime: static`](#runtime)), `pserv` for a [private service](private-services), `worker` for a [background worker](background-workers), `cron` for a [cron job](cronjobs), or `keyvalue` for a [Render Key Value instance](key-value) (`redis` is a deprecated alias for `keyvalue`). You can't modify this value after creation. You define Render Postgres databases separately, in the [`databases`](#database-fields) list. |
| `runtime` | **Required** unless [`type`](#type) is `keyvalue` or `redis`. The service's runtime. Supported native language runtimes are `node`, `python`, `elixir`, `go`, `ruby`, and `rust`. Special-case runtimes are `docker` for services that [build an image](docker#building-from-a-dockerfile) from a Dockerfile, `image` for services that [pull a prebuilt image](deploying-an-image) from a registry, and `static` for [static sites](static-sites). You can't modify this value after creation. This field replaces the `env` field (`env` is still supported but is discouraged). |
|
| `plan` | The service's instance type ([see pricing](pricing#services)). One of the following: - `free` (not available for private services, background workers, or cron jobs) - `starter` - `standard` - `pro` - `pro plus` The following additional instance types are available for [web services](web-services), [private services](private-services), and [background workers](background-workers): - `pro max` - `pro ultra` **If you omit this field:** - Render uses `starter` for a new service. - Render retains the current instance type for an existing service. |
| `previews.generation` | The preview generation mode to use for this service's [pull request previews](service-previews). Supported values include: - `manual` - `automatic` For details on each, see [Manual vs. automatic PR previews](service-previews#manual-vs-automatic-pr-previews). If you omit this field, pull request previews are disabled for the service. Setting the deprecated field `pullRequestPreviewsEnabled: true` is equivalent to setting this field to `automatic`. This field does not affect configuration for [preview environments](preview-environments). |
| `previews.numInstances` | The number of instances to use for this service in [preview environments](preview-environments). If you omit this field, preview instances use the same number of instances as the base service. If the base service uses autoscaling, preview instances use the minimum number of instances for the base service. |
| `previews.plan` | The instance type to use for this service in [preview environments](preview-environments). If you omit this field, preview instances use the same instance type as the base service. |
| `buildCommand` | **Required** for non-Docker-based services. The command that Render runs to [build your service](deploys#build-command). Basic examples include: - `npm install` (Node.js) - `pip install -r requirements.txt` (Python) |
| `startCommand` | **Required** for non-Docker-based services. The command that Render runs to [start your service](deploys#start-command). Basic examples include: - `npm start` (Node.js) - `gunicorn your_application.wsgi` (Python) Docker-based services set the optional [`dockerCommand`](#dockercommand) field instead of this field. |
| `schedule` | **Required** for [cron jobs](cronjobs), omit otherwise. The schedule for running the cron job, as a [cron expression](cronjobs#setup). |
| `preDeployCommand` | If specified, this command runs _after_ the service's [`buildCommand`](#buildcommand) but _before_ its [`startCommand`](#startcommand). Recommended for running database migrations and other pre-deploy tasks. Learn more about the [pre-deploy command](deploys#pre-deploy-command). |
| `region` | The [region](regions) to deploy the service to. One of the following: - `oregon` (default) - `ohio` - `virginia` - `frankfurt` - `singapore` You can't modify this value after creation. This field does not apply to [static sites](static-sites). If omitted, the default value is `oregon`. |
| `repo` | For Git-based services, the URL of the GitHub/GitLab repo to use. Your Git provider account must have access to the repo. If omitted, Render uses the repo that contains the `render.yaml` file itself. For services that pull a prebuilt Docker image, set [`image`](#image) instead of this field. |
| `branch` | For Git-based services, the branch of the linked [`repo`](#repo) to use. If omitted, Render uses the repo's default branch. **If you're using [preview environments](preview-environments), you probably _don't_ want to set this field.** If you _do_ set it, Render uses the specified branch in all preview environments, _instead of_ your pull request's associated branch. This prevents you from testing code changes in the preview environment. |
| `autoDeployTrigger` | Sets the [automatic deploy](deploys#configuring-auto-deploys) behavior for a Git-based service. This field replaces the deprecated `autoDeploy` field. If you include both, this field takes precedence. One of the following: - `commit`: Trigger a deploy on each commit to the service's linked branch (equivalent to the deprecated setting `autoDeploy: true`). - `checksPass`: Trigger a deploy only if the linked branch's CI checks pass. - `off`: Disable auto-deploys (equivalent to the deprecated setting `autoDeploy: false`). This field has no effect for services that [deploy a prebuilt Docker image](deploying-an-image). **If you omit this field:** - Render uses `commit` for a new service. - Render retains the current value for an existing service. |
| `domains` | [Web services](web-services) and [static sites](static-sites) only. A list of [custom domains](custom-domains) for the service. Internet-accessible services are always reachable at their `.onrender.com` subdomain. For each root domain in the list, Render automatically adds a `www.` subdomain that redirects to the root domain. For each `www.` subdomain in the list, Render automatically adds the corresponding root domain and redirects it to the `www.` subdomain. |
| `healthCheckPath` | [Web services](web-services) only. The path of the service's [health check endpoint](health-checks) for zero-downtime deploys. |
| `maxShutdownDelaySeconds` | [Web services](web-services), [private services](private-services), and [background workers](background-workers) only. The maximum amount of time (in seconds) that Render waits for your application process to exit gracefully after sending it a `SIGTERM` signal. For details, see [Zero-downtime deploys](deploys#zero-downtime-deploys). After this delay, Render terminates the process with a `SIGKILL` signal if it's still running. Render most commonly shuts down instances as part of redeploying your service or scaling it down. Set this field to give instances more time to finish any existing work before termination. This value must be an integer between `1` and `300`, inclusive. If omitted, the default value is `30`. |
### Docker

The following fields are specific to [Docker-based services](docker). This includes both services that build an image with a `Dockerfile` ([`runtime: docker`](#runtime)) and services that pull a prebuilt image from a registry ([`runtime: image`](#runtime)).

#### Building from a `Dockerfile`

| Field | Description |
|--------|--------|
| `dockerCommand` | The command to run when starting the Docker-based service. If omitted, Render uses the `CMD` defined in the `Dockerfile`. |
| `dockerfilePath` | The path to the service's `Dockerfile`, relative to the repo root. Typically used for services in a [monorepo](monorepo-support). If omitted, Render uses `./Dockerfile`. |
| `dockerContext` | The path to the service's Docker build context, relative to the repo root. Typically used for services in a [monorepo](monorepo-support). If omitted, Render uses the repo root. |
| `registryCredential` | If your `Dockerfile` references any private images, you must specify a valid credential that can access those images. This field uses the following format: `registryCredential: fromRegistryCreds: name: my-credentials # The name of a credential you've added to your workspace` Add registry credentials in the [Render Dashboard][dboard] from your Workspace Settings page, or via the [Render API](https://api-docs.render.com/reference/create-registry-credential). |

#### Pulling a prebuilt image

| Field | Description |
|--------|--------|
| `image` | Details for the Docker image to pull from a registry. This field uses the following format: `image: url: docker.io/my-name/my-image:latest creds: # Only for private images fromRegistryCreds: name: my-credential-name # The name of a credential you've added to your workspace` Provide `creds` only if you're pulling a private image. Add registry credentials in the [Render Dashboard][dboard] from your Workspace Settings page, or via the [Render API](https://api-docs.render.com/reference/create-registry-credential).
For more information, see [Deploy a Prebuilt Docker Image](deploying-an-image). |

### Scaling

> **Note the following about [scaling](scaling):**
>
> - You can't scale a service with an attached [persistent disk](disks).
> - Autoscaling requires a [**Professional** workspace](professional-features) or higher. Manual scaling is available for all workspaces.
> - If you add an existing service to a Blueprint, that service retains any existing autoscaling settings unless you add the [`scaling`](#scaling) field in your Blueprint.
> - Autoscaling is disabled in [preview environments](preview-environments). Instead, autoscaled services always run a number of instances equal to their [`minInstances`](#scaling-1).

| Field | Description |
|--------|--------|
| `numInstances` | For a [manually scaled](scaling#manual-scaling) service, the number of instances to scale the service to. **If you omit this field:** - Render uses `1` for a new service. - Render retains the current value for an existing service. **This value has no effect for services with autoscaling enabled.** Configure autoscaling behavior with the [`scaling`](#scaling-1) field. |
| `scaling` | For an [autoscaled](scaling#autoscaling) service, configuration details for the service's autoscaling behavior. Example: `scaling: minInstances: 1 # Required maxInstances: 3 # Required targetMemoryPercent: 60 # Optional if targetCPUPercent is set (valid: 1-90) targetCPUPercent: 60 # Optional if targetMemoryPercent is set (valid: 1-90)` |

### Build

| Field | Description |
|--------|--------|
| `buildFilter` | File paths in the service's repo to include or ignore when determining whether to trigger an automatic build. Especially useful for [monorepos](monorepo-support#setting-build-filters). Build filter paths use [glob syntax](monorepo-support#filter-syntax). They are always relative to the repo's root directory. When synced, this value _fully replaces_ an existing service's build filter settings.
If you _omit_ this field for a service with existing build filter settings, Render _replaces_ those settings with empty lists. `buildFilter: paths: # Only trigger a build with changes to these files - src/**/*.js ignoredPaths: # Ignore these files, even if they match a path in 'paths' - src/**/*.test.js` |
| `rootDir` | The service's root directory within its repo. Changes to files _outside_ the root directory do not trigger a build for the service. Set this when working in a [monorepo](monorepo-support#setting-a-root-directory). If omitted, Render uses the repo's root directory. |

### Disks

Attach a [persistent disk](disks) to a compatible service with the `disk` field:

```yaml
disk:
  name: app-data       # Required field
  mountPath: /opt/data # Required field
  sizeGB: 5            # Default: 10
```

You can modify the `name` and `mountPath` of an existing disk. You can _increase_ the `sizeGB` of an existing disk, but you can't reduce it.

### Static sites

The following fields are specific to [static sites](static-sites):

| Field | Description |
|--------|--------|
| `staticPublishPath` | **Required.** The path to the directory that contains the static files to publish, relative to the repo root. Common examples include `./build` and `./dist`. |
| `headers` | Configuration details for a static site's [HTTP response headers](static-site-headers). Example: `headers: # Adds X-Frame-Options: sameorigin to all site paths - path: /* name: X-Frame-Options value: sameorigin # Adds Cache-Control: must-revalidate to /blog paths - path: /blog/* name: Cache-Control value: must-revalidate` You can modify existing header rules and add new ones. Render _preserves_ any existing header rules that are not included in the Blueprint file. |
| `routes` | Configuration details for a static site's [redirect and rewrite routes](redirects-rewrites).
Example: `routes: # Redirect (HTTP status 301) from /a to /b - type: redirect source: /a destination: /b # Rewrite all /app/* requests to /app - type: rewrite source: /app/* destination: /app` You can modify existing routing rules and add new ones. Render _preserves_ any existing routing rules that are not included in the Blueprint file. |

### Render Key Value

You define Render Key Value instances in the `services` field of `render.yaml` alongside your other non-Postgres services. A Key Value instance has the [`type`](#type) `keyvalue` (or its deprecated alias `redis`).

#### Example definitions

```yaml
services:
  # A Key Value instance that defines all available fields
  - type: keyvalue
    name: thunder
    ipAllowList: # Allow external connections from only these CIDR blocks
      - source: 203.0.113.4/30
        description: office
      - source: 198.51.100.1
        description: home
    region: frankfurt # Default: oregon
    plan: pro # Default: starter
    previewPlan: starter # Default: use the value for 'plan'
    maxmemoryPolicy: allkeys-lru # Default: allkeys-lru

  # A Key Value instance that allows all external connections
  - type: keyvalue
    name: lightning
    ipAllowList: # Allow external connections from everywhere
      - source: 0.0.0.0/0
        description: everywhere

  # A Key Value instance that allows only internal connections
  - type: keyvalue
    name: private cache
    ipAllowList: [] # Only allow internal connections
```

#### Key-Value-specific fields

| Field | Description |
|--------|--------|
| `ipAllowList` | **Required.** See [Data access control](#data-access-control). |
| `maxmemoryPolicy` | The Key Value instance's eviction policy for when it reaches its maximum memory limit. One of the following: - `allkeys-lru` (default) - `volatile-lru` - `allkeys-random` - `volatile-random` - `volatile-ttl` - `noeviction` For details on these policies, see the [Render Key Value documentation](key-value#maxmemory-policy). |
### Environment variables

See [Setting environment variables](#setting-environment-variables).

## Database fields

Each entry in a Blueprint file's `databases` list is an object that represents a Render Postgres instance. See below for supported fields.

### Example definitions

```yaml
databases:
  # A basic-4gb database instance with one read replica
  - name: prod # Required
    postgresMajorVersion: '18' # Default: most recent supported version
    region: frankfurt # Default: oregon
    plan: basic-4gb # Default: basic-256mb
    databaseName: prod_app # Default: generated value based on name
    user: app_user # Default: generated value based on name
    ipAllowList: # Default: allows all connections
      - source: 203.0.113.4/30
        description: office
      - source: 198.51.100.1
        description: home
    readReplicas: # Default: does not add any read replicas
      - name: prod-replica

  # A database that allows only private network connections
  - name: private database
    databaseName: private
    ipAllowList: [] # Only allow internal connections

  # A database that enables high availability
  - name: highly available database
    plan: pro-16gb
    highAvailability:
      enabled: true
```

### Essential fields

| Field | Description |
|--------|--------|
| `name` | **Required.** The Postgres instance's name. Provide a unique name for each service in your Blueprint file. If you add the name of an _existing_ instance to your Blueprint file, Render attempts to apply the Blueprint's configuration to that existing instance. You can't modify this value after creation. |
| `plan` | The database's instance type ([see pricing](pricing#postgresql)).
One of the following: **Current instance types:** - `free` - `basic-256mb` - `basic-1gb` - `basic-4gb` - `pro-4gb` - `pro-8gb` - `pro-16gb` - `pro-32gb` - `pro-64gb` - `pro-128gb` - `pro-192gb` - `pro-256gb` - `pro-384gb` - `pro-512gb` - `accelerated-16gb` - `accelerated-32gb` - `accelerated-64gb` - `accelerated-128gb` - `accelerated-256gb` - `accelerated-384gb` - `accelerated-512gb` - `accelerated-768gb` - `accelerated-1024gb` **[Legacy instance types](postgresql-legacy-instance-types):** - `starter` - `standard` - `pro` - `pro plus` You cannot create new databases on a [legacy instance type](postgresql-legacy-instance-types). You can move a database from a legacy instance type to a current instance type, but you can't move it back. **If you omit this field:** - Render uses `basic-256mb` for a new database. - Render retains the current instance type for an existing database. |
| `previewPlan` | The instance type to use for this database in [preview environments](preview-environments). If you omit this field, preview instances use the same instance type as the primary database (specified by [`plan`](#plan-1)). If your primary database uses a new [flexible instance type](postgresql-refresh), you cannot specify a _non_-flexible instance type for `previewPlan` (or vice versa). |
| `diskSizeGB` | The database's disk size, in GB. Not valid for [legacy instance types](postgresql-legacy-instance-types), which have a fixed disk size. This value must be either `1` or a multiple of `5`. You can increase disk size, but you can't _decrease_ it. **If you omit this field:** - For a new database, Render uses a default disk size based on the instance type's tier: Free: 1 GB, Basic: 15 GB, Pro: 100 GB, Accelerated: 250 GB. - For an existing database, Render retains the current disk size. |
| `previewDiskSizeGB` | The disk size to use for this database in [preview environments](preview-environments).
If you omit this field, preview instances use the same disk size as the primary database (specified by [`diskSizeGB`](#disksizegb)). |
| `region` | The [region](regions) to deploy the instance to. One of the following: - `oregon` (default) - `ohio` - `virginia` - `frankfurt` - `singapore` You can't modify this value after creation. If omitted, the default value is `oregon`. |
| `ipAllowList` | See [Data access control](#data-access-control). |

### PostgreSQL settings

| Field | Description |
|--------|--------|
| `postgresMajorVersion` | The major version number of PostgreSQL to use, as a string (e.g., `"17"`). If omitted, Render uses the most recent version supported by the platform. You can't modify this value after creation. |
| `databaseName` | The name of your database in the PostgreSQL instance. This is different from the [`name`](#name-1) of the Render Postgres instance itself. If omitted, Render automatically generates a name for the database based on [`name`](#name-1). You can't modify this value after creation. |
| `user` | The name of the PostgreSQL user to create for your instance. If omitted, Render automatically generates a username based on [`name`](#name-1). You can't modify this value after creation. |

### Database replicas

You can add two types of replica to a Render Postgres instance:

- [Read replicas](#readreplicas) for increased query throughput
- A [high availability standby](#highavailability) for rapid recovery from primary instance failures

| Field | Description |
|--------|--------|
| `readReplicas` | Add one or more read replicas to a Render Postgres instance with the following syntax: `readReplicas: - name: my-db-replica` Note the following: - You can add up to five read replicas to a given Render Postgres instance. - If you omit this field, Render _preserves_ any existing read replicas for the instance.
- If you provide different `name` values from a database's existing read replicas, Render creates a _new_ replica for each new name and _destroys_ any existing replicas that don't match any provided name. - If you provide an empty list (e.g., `readReplicas: []`), Render destroys any existing replicas and does _not_ create new replicas. - You can reference a read replica's properties in another service's environment variables, as you would for any other database. See [Referencing values from other services](#referencing-values-from-other-services). For more information, see [Read Replicas for Render Postgres](postgresql-read-replicas). |
| `highAvailability` | Add a high availability **standby** to a Render Postgres instance with the following syntax: `highAvailability: enabled: true` **For your database to support high availability, it _must_:** - Belong to a [**Professional** workspace](professional-features) or higher - Use the **Pro** instance type or higher - Use **PostgreSQL version 13 or later** For more information, see [High Availability for Render Postgres](postgresql-high-availability). |

## Data access control

To control which IP addresses can access your Render Postgres and Key Value instances from outside Render's network, use the `ipAllowList` field:

```yaml
ipAllowList:
  - source: 203.0.113.4/30
    description: office # optional
  - source: 198.51.100.1
```

The `ipAllowList` field is required for Key Value instances. If you omit this field for a Render Postgres database, _any_ source with valid credentials can access the database. IP address ranges use [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_blocks). The `description` field is optional.
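As a quick sketch of the notation (reusing the documentation example addresses above, not real endpoints), a bare address and its `/32` CIDR form are interchangeable, while a shorter prefix covers a range:

```yaml
ipAllowList:
  - source: 198.51.100.1      # a bare IP allows exactly one address...
  - source: 198.51.100.1/32   # ...equivalent to this /32 CIDR block
  - source: 203.0.113.4/30    # a /30 block covers four addresses (203.0.113.4 through 203.0.113.7)
```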
To block _all_ external connections, provide an empty list:

```yaml
ipAllowList: [] # Only allow internal connections
```

To _allow_ all external connections, provide the following CIDR block:

```yaml
ipAllowList: # allow external connections from everywhere
  - source: 0.0.0.0/0
    description: everywhere
```

Learn more about access control for [Render Postgres](postgresql-creating-connecting#restricting-external-access) and [Render Key Value](key-value#enabling-external-connections).

## Projects and environments

> Learn more about [projects and environments](projects).

```yaml
projects:
  - name: my-project
    environments:
      - name: production
        # These resources will belong to the my-project/production environment.
        # Do not duplicate these definitions at the root level.
        services:
          - name: my-web-service
            type: web
            runtime: node
            buildCommand: npm install
            startCommand: npm start
            envVars:
              - key: MY_ENV_VAR
                value: my-value
        databases:
          - name: my-database
            plan: basic-256mb
        envVarGroups:
          - name: my-env-group
            envVars:
              - key: MY_ENV_VAR
                value: my-value
        # Environment-specific settings
        networking:
          isolation: enabled
        permissions:
          protection: enabled
```

### Project fields

| Field | Description |
|--------|--------|
| `name` | **Required.** The project's name. |
| `environments` | **Required.** A list of the project's [environments](#environment-fields). Each project must have at least one environment. |

### Environment fields

| Field | Description |
|--------|--------|
| `name` | **Required.** The environment's name. |
| `services` | A list of the [services](#services) that belong to the environment. Matches the format of the root-level [`services`](#services) field. **Do not define the same service in more than one location.** |
| `databases` | A list of the [Render Postgres databases](#databases) that belong to the environment. Matches the format of the root-level [`databases`](#databases) field.
**Do not define the same database in more than one location.** |
| `envVarGroups` | A list of the environment groups that belong to the environment. Matches the format of the root-level [`envVarGroups`](#environment-groups) field. **Do not define the same environment group in more than one location.** |
| `networking.isolation` | Controls [private network isolation](projects#blocking-cross-environment-traffic) for the environment. `networking: isolation: enabled # Block private network traffic into/out of environment` Supported values include: - `enabled` - `disabled` If omitted, the default value is `disabled`. |
| `permissions.protection` | Controls whether the environment is [protected](projects#protected-environments), which prevents destructive actions by non-admin workspace members. `permissions: protection: enabled # Prevent destructive actions by non-admins` Supported values include: - `enabled` - `disabled` If omitted, the default value is `disabled`. |

## Setting environment variables

Set names and values for a service's environment variables in the `envVars` field:

```yaml
envVars:
  # Sets a hardcoded value
  # (DO NOT hardcode secrets in your Blueprint file!)
  - key: API_BASE_URL
    value: https://api.example.com

  # Generates a base64-encoded 256-bit value
  # (unless a value already exists)
  - key: APP_SECRET
    generateValue: true

  # Prompts for a value in the Render Dashboard on creation
  # (useful for secrets)
  - key: STRIPE_API_KEY
    sync: false

  # References a property of a database
  # (see available properties below)
  - key: DATABASE_URL
    fromDatabase:
      name: mydatabase
      property: connectionString

  # References an environment variable of another service
  # (see available properties below)
  - key: MINIO_PASSWORD
    fromService:
      name: minio
      type: pserv
      envVarKey: MINIO_ROOT_PASSWORD

  # Adds all environment variables from an environment group
  - fromGroup: my-env-group
```

> A Blueprint can create new environment variables or modify the values of existing ones.
> Render _preserves_ existing environment variables, even if you omit them from the Blueprint file.

### Referencing values from other services

When setting an environment variable in a Blueprint file, you can reference certain values from your other Render services.

> You _can_ reference a service that isn't in the Blueprint, but that service must exist in your workspace for the Blueprint to be valid.

To reference a value from _most_ service types, use the `fromService` field. For Render Postgres, instead use `fromDatabase`:

```yaml
# Any non-Postgres service
- key: MINIO_HOST
  fromService:
    name: minio
    type: pserv
    property: host

# Render Postgres
- key: DATABASE_URL
  fromDatabase:
    name: mydatabase
    property: connectionString
```

To reference another service's environment variable, set `envVarKey` instead of `property`:

```yaml
- key: MINIO_PASSWORD
  fromService:
    name: minio
    type: pserv
    envVarKey: MINIO_ROOT_PASSWORD
```

- **In all cases,** provide the service's `name`, along with the `property` or `envVarKey` to use.
- **For `fromService`,** you must also provide the referenced service's [`type`](#type).

Supported values of `property` include:

| Property | Description |
|--------|--------|
| `host` | [Web services](web-services) and [private services](private-services) only. The service's hostname on the [private network](private-network). |
| `port` | [Web services](web-services) and [private services](private-services) only. The port of the service's HTTP server. |
| `hostport` | [Web services](web-services) and [private services](private-services) only. The service's [host](#host) and [port](#port), separated by a colon. Use this value to connect to the service over the [private network](private-network). Example: `my-service:10000` |
| `connectionString` | Render Postgres and Key Value only. The URL for connecting to the datastore over the [private network](private-network).
- **For Render Postgres,** has the format `postgresql://user:password@host:port/database` - **For Render Key Value,** has the format `redis://red-xxxxxxxxxxxxxxxxxxxx:6379` (or `redis://user:password@red-xxxxxxxxxxxxxxxxxxxx:6379` if [internal authentication](key-value#requiring-auth-for-internal-connections) is enabled) |
| `user` | Render Postgres only. The name of the user for your PostgreSQL database. Included as a component of [`connectionString`](#connectionstring). |
| `password` | Render Postgres only. The password for your PostgreSQL database. Included as a component of [`connectionString`](#connectionstring). |
| `database` | Render Postgres only. The name of your database within the PostgreSQL instance (_not_ the `name` of the PostgreSQL instance itself). Included as a component of [`connectionString`](#connectionstring). |

### Prompting for secret values

Some environment variables contain secret credentials, such as an API key or access token. **Do not hardcode these values in your `render.yaml` file!**

Instead, you can define these environment variables with `sync: false`, like so:

```yaml
- key: STRIPE_API_KEY
  sync: false
```

During the initial Blueprint creation flow in the [Render Dashboard][dboard], you're prompted to provide a value for each environment variable with `sync: false`:

[img]

**Note the following limitations:**

- Render prompts you for these values _only during the initial Blueprint creation flow_.
- When you update an existing Blueprint, Render _ignores_ any environment variables with `sync: false`. Add any new secret credentials to your existing services [manually](configure-environment-variables#setting-environment-variables).
- Render does not include `sync: false` environment variables in [preview environments](preview-environments). As a workaround, you can _also_ manually define the environment variable in an environment group that you apply to the service.
For details, see [this page](preview-environments#placeholder-environment-variables).
- You can't apply `sync: false` to environment variables defined in an [environment group](#environment-groups). If you do this, Render ignores the environment variable.

### Generating random secrets

You can generate a random value for an environment variable by setting `generateValue: true`:

```yaml
- key: JWT_SECRET
  generateValue: true
```

If the environment variable doesn't already exist, Render adds it and sets its value to a randomized, base64-encoded, 256-bit value (looks like this: `B0jrphAPOY7pg92AN0c9MN4yecczLMdwnx4OkA1KFUk=`).

### Environment groups

You can define [environment groups](configure-environment-variables#environment-groups) in the root-level `envVarGroups` field of your `render.yaml` file:

```yaml
envVarGroups:
  - name: my-env-group
    envVars:
      - key: CONCURRENCY
        value: 2
      - key: SHARED_SECRET
        generateValue: true
```

Each environment group has a `name` and a list of zero or more `envVars`. Definitions in the `envVars` list can use some (_but not all_) of the same formats as [`envVars` for a service](#setting-environment-variables):

- An environment group can't [reference values](#referencing-values-from-other-services) from your services, or from other environment groups.
- You can't define an environment variable with `sync: false` in an environment group.

### Variable interpolation

Render does not support variable interpolation in a `render.yaml` file. To achieve similar behavior, pair environment variables with a build or start script that performs the interpolation for you.

# Preview Environments

> Preview environments require a [*Professional* workspace](professional-features) or higher.

It is critical to have testing and staging environments accurately reflect production, but achieving this can be a major operational hassle.
Most engineering teams use a single staging environment, which makes it hard for developers to test their changes in isolation; the alternative is for devops teams to spin up new testing or staging environments manually and tear them down after testing is done.

Render's *preview environments* solve this problem by automatically creating a fresh copy of your production environment (including services, databases, and environment groups) on every pull request, so you can test your changes with confidence without affecting staging or relying on devops teams to create and destroy infrastructure.

> A preview environment creates new instances of the services and datastores defined in your Blueprint. These instances do not copy any data from existing services. If you need to run any initial setup (such as seeding a database), you can use [Preview Environment Initialization](#preview-environment-initialization).

Render keeps your preview environments up to date on every commit and automatically destroys them when the original pull request is merged or closed. You can also set an expiry time to automatically clean up preview environments after a period of inactivity.

Preview environments are helpful in many cases:

- Share your changes live in code reviews: no more Git diffs for visual changes!
- Get shareable links for upcoming features and collaborate more effectively with internal and external stakeholders.
- Run CI tests against a high-fidelity copy of your production environment before merging.

## Getting started

1. Make sure your services and databases are defined in a `render.yaml` file and synchronized as a Blueprint in the [Render Dashboard][dboard]. For details, see [Render Blueprints](infrastructure-as-code).
2. At the root level of your `render.yaml` file, enable preview environments by setting the `previews.generation` key to `manual` or `automatic`:

```yaml{1-2}
previews:
  generation: automatic
services:
  - type: web
  ...
```

For details on each option, see [Manual vs. automatic preview environments](#manual-vs-automatic-preview-environments).

> Setting the deprecated field `previewsEnabled: true` is equivalent to setting this field to `automatic`.

3. Merge your changes to your Blueprint's linked branch.

You're all set! Open a new pull request in your repository and see your preview environment deploy with status updates right in the pull request. You can visit the URL for your preview environment by clicking **View deployment** next to your web service deployment.

[img]

> As of this writing, GitLab does not support status updates on merge requests.

If you'd like to try this for yourself, fork our [Preview Environments example repository](https://github.com/render-examples/preview-environment), synchronize the `render.yaml` file [in your dashboard](https://dashboard.render.com/blueprints), and open a new pull request.

> If you explicitly set a `branch` for your services in `render.yaml`, Render also uses that branch to deploy preview environments, which might not be what you expect. If you're using preview environments, you typically shouldn't specify a branch: Render uses the branch the Blueprint was created for initially, and then the branch each pull request is against when creating the preview environment.

### Manual vs. automatic preview environments

| Preview Mode | Description |
|--------|--------|
| **Manual** | By default, Render does _not_ create preview environments for PRs. To create a preview environment for a specific PR, include the string `[render preview]` in your PR's _title_ (not the commit message): `[render preview] Update homepage` You can also edit an _existing_ PR's title to add or remove `[render preview]`. If you do, Render provisions or deletes associated preview instances accordingly. |
| **Automatic** | By default, Render creates a preview environment for _every_ PR against your Blueprint's linked branch.
To skip creating a preview environment for a specific PR, include any of the following strings in your PR's _title_ (not the commit message): - `[skip preview]` - `[skip render]` - `[preview skip]` - `[render skip]` You can also edit an _existing_ PR's title to add or remove one of these strings. If you do, Render provisions or deletes associated preview instances accordingly. > Your pull request's title might be included in the message for its associated merge commit. If you use `[skip render]` or `[render skip]`, this also [skips the auto-deploy](deploys#skipping-an-auto-deploy) for the service when merged. To avoid this, instead use `[skip preview]` or `[preview skip]`. | ### Override preview instance types Services in a preview environment can use a different instance type from their production counterparts. By using smaller instance types for preview environments, you can reduce costs. - For Render Postgres and Key Value instances, set the `previewPlan` field. - For all other service types, set the `previews.plan` field. > If your Render Postgres database uses a new [flexible plan](postgresql-refresh), you cannot specify a _non_-flexible instance type for its `previewPlan` (or vice versa). [See supported values.](blueprint-spec#plan-1) See example `render.yaml` declarations below. For all supported values, see the [Blueprint YAML Reference](blueprint-spec#plan). If you don't specify a preview instance type for a service, Render uses the same instance type that you use in production. 
```yaml
previews:
  generation: automatic
services:
  - type: web
    plan: standard
    previews:
      plan: starter
    name: express-server
    runtime: node
  - type: keyvalue
    plan: standard
    previewPlan: starter
    name: private cache
    ipAllowList: [] # only allow internal connections
databases:
  - name: my_test_db
    plan: pro-4gb
    previewPlan: basic-1gb
    previewDiskSizeGB: 5
```

### Override number of preview instances

Web services, private services, and background workers in a preview environment can use a different number of instances from their production counterparts. By using fewer instances for preview environments, you can reduce costs.

See example `render.yaml` declarations below. For all supported values, see the [Blueprint YAML Reference](blueprint-spec#plan).

If you don't specify a number of preview instances for a service, Render uses the same number of instances that you use in production. If autoscaling is configured, Render uses the minimum number of instances.

```yaml
previews:
  generation: automatic
services:
  - type: web
    plan: standard
    numInstances: 7
    previews:
      numInstances: 6
    name: express-server
    runtime: node
  - type: web
    plan: standard
    scaling:
      minInstances: 2
      maxInstances: 10
      targetCPUPercent: 70
    previews:
      numInstances: 6
    name: autoscaling-express-server
    runtime: node
```

### Environment variables

You can override environment variables in preview environments with `previewValue`. This can be useful if you need to override a production API key with a test key, or if you'd like to use a single database across all preview environments. Environment variable overrides are supported for web services, private services, and [environment groups](blueprint-spec#environment-groups).
```yaml
previews:
  generation: automatic
services:
  - type: web
    plan: standard
    name: express-server
    runtime: node
    envVars:
      - key: MY_API_KEY
        value: production-api-key
        previewValue: test-api-key
```

#### Placeholder environment variables

[Placeholder environment variables](blueprint-spec#prompting-for-secret-values) defined with `sync: false` are not copied to preview environments. To share secret variables across preview environments:

1. Manually create an environment group in the [Dashboard](https://dashboard.render.com/new/env-group).
2. Add one or more environment variables.
3. Reference the environment group in your `render.yaml` file, as needed.

```yaml
previews:
  generation: automatic
services:
  - type: web
    plan: standard
    name: express-server
    runtime: node
    envVars:
      # The value for `MY_API_KEY` provided in the Dashboard will *not* be
      # copied to preview environments.
      - key: MY_API_KEY
        sync: false
      # Any values in this group will be copied to preview environments,
      # if `all-settings` exists and is *not* included in this file.
      - fromGroup: all-settings
```

> You can also use an environment group that's managed by a Blueprint, if it's not the same Blueprint that you're using to manage your preview environments.
>
> If you use the same Blueprint for both, a new environment group will be created for each preview environment. Placeholder environment variables will not be copied to these environment groups.

### Preview environment initialization

You may want to run custom initialization for your preview environment after it is created but not on subsequent deploys, for example to seed a newly created database or download files to disk. You can do this by specifying a command to run after the first successful deploy with `initialDeployHook`.
```yaml
previews:
  generation: automatic
services:
  - type: web
    plan: standard
    name: express-server
    runtime: node
    initialDeployHook: ./seed_database.sh
```

### Automatic expiration

You can set the number of days a preview environment can exist without any new commits to help manage costs. Set `previews.expireAfterDays` to automatically delete the environment after the specified number of days of inactivity. The default is no expiry. The expiration time is reset with every push to the preview environment.

```yaml
previews:
  generation: automatic
  expireAfterDays: 3
services:
  - type: web
    plan: standard
    name: express-server
    runtime: node
```

## Root directory and build filters

If you [define the Root Directory or specify Build Filters](monorepo-support) for each service in your Blueprint Spec, Render will only create a preview environment if the files changed in a pull request match the Root Directory or Build Filter paths for at least one service.

## Preview environment billing

Preview resources are billed just like regular Render services and are prorated by the second. See [Render Pricing](pricing) for service and instance type details.

# Render Terraform Provider

You can use Render's official [Terraform provider](https://registry.terraform.io/providers/render-oss/render/latest) to incorporate your Render resources into your existing Terraform configuration. This enables you to manage Render services alongside the rest of your infrastructure.

*See an example resource declaration*

```hcl
# Basic example web service configuration
resource "render_web_service" "web" {
  name          = "terraform-web-service"
  plan          = "starter"
  region        = "oregon"
  start_command = "npm start"

  runtime_source = {
    native_runtime = {
      auto_deploy   = true
      branch        = "main"
      build_command = "npm install"
      repo_url      = "https://github.com/render-examples/express-hello-world"
      runtime       = "node"
    }
  }
}
```

Documentation for the provider is available in the Terraform Registry:

## Terraform or Blueprints?
We recommend using [Blueprints](infrastructure-as-code) (Render's IaC model) where possible. If you don't need to include any non-Render infrastructure in your configuration, it's quicker to get started with Blueprints, and they're tightly integrated with the Render platform. If you _do_ need to manage your Render services alongside other infrastructure, use the Render Terraform provider.

# Health Checks

> Health checks are currently available only for [web services](web-services).

You can (and should!) define a *health check endpoint* for every web service to help Render determine whether it's ready to receive traffic. Render sends an HTTP request to this endpoint as part of [zero-downtime deploys](deploys#zero-downtime-deploys), and also every few seconds to verify the health of running services.

Set your health check endpoint path in the [Render Dashboard][dboard] from your web service's *Settings* page:

[img]

If you manage your service with a Blueprint, instead set the [`healthCheckPath`](blueprint-spec#healthcheckpath) field in your `render.yaml` file.

## Health check protocol

With every health check, Render sends an HTTP `GET` request to each service instance's health check endpoint. If your service has at least one [custom domain](custom-domains), Render sets one of those domains as the value of the `Host` header for the request. Otherwise, Render uses the service's `onrender.com` subdomain.

- *The check succeeds* if your health check endpoint responds with a `2xx` or `3xx` status code. Render considers the instance healthy.
- *The check fails* in all other cases (including after a 5-second response timeout). Render considers the instance _potentially_ unhealthy.
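As an illustration of this protocol, a minimal endpoint can be sketched with only the Python standard library. The `/healthz` path and the `check_database()` stub below are hypothetical placeholders, not Render requirements; any path that returns a `2xx` or `3xx` works.

```python
# Minimal health check endpoint sketch using only the Python standard library.
# Assumptions: the /healthz path and check_database() stub are hypothetical;
# Render accepts any 2xx or 3xx response as healthy.
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_database() -> bool:
    # Stand-in for an operation-critical check, such as a simple
    # connectivity query against your database.
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz" and check_database():
            self.send_response(200)  # 2xx/3xx: Render marks the instance healthy
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(503)  # anything else fails the health check
            self.end_headers()

    def log_message(self, format, *args):
        pass  # keep periodic health-check requests out of the request log

# To serve, bind the port your service listens on, e.g.:
#     HTTPServer(("", 10000), HealthHandler).serve_forever()
```

In a real service you'd register an equivalent route in your web framework rather than run a separate server.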
If a potentially unhealthy instance continues to fail its health checks, Render takes the following actions:

- During a [zero-downtime deploy](deploys#zero-downtime-deploys):
  - If a new instance fails all of its health checks for 15 consecutive minutes, Render cancels the deploy and continues routing traffic to existing instances.
- For an actively running service:
  - If an instance fails all of its health checks for 15 consecutive seconds, Render stops routing traffic to it to give it an opportunity to recover.
  - After 60 consecutive seconds of failed health checks, Render automatically [restarts the service](deploys#restarting-a-service).

In the event of a canceled deploy or a service restart, Render [notifies you](notifications) according to your settings.

> *The actions your endpoint should take to verify service health depend on your service's details.*
>
> We recommend performing operation-critical checks, such as executing a simple database query to confirm connectivity.

# Best Practices for Maximizing Uptime

To help keep your Render services healthy and responsive, we recommend the following best practices. Many of these apply to services on _any_ deployment platform—not just Render!

## Run more than one instance

Server hardware isn't perfect, and neither are the data centers that orchestrate that hardware. When you [scale your service to multiple instances](scaling), Render runs those instances on different nodes. This means that if a particular instance (or an entire node) goes down, at least one instance of your service stays up and running.

[diagram]

When issues like these occur, Render also _automatically_ moves affected services to new instances, but this can take a few minutes. By running multiple instances, your service remains up during the automatic transition.

## Enable health checks

Sometimes a service instance gets into an unresponsive state and needs a quick restart.
Render can detect this situation with [*health checks*](deploys#health-checks). You define an HTTP endpoint path in your service that always returns a `2xx` response (if the service is functioning normally), and Render sends periodic requests to that path. If those requests fail several times in a row for a particular instance, Render restarts it.

Health checks also protect you from bad deploys: if a new deploy fails its health check, Render keeps the previous deploy running.

[Learn more about health checks.](deploys#health-checks)

## Log the `CF-Ray` ID of each request

All inbound requests to Render web services pass through Cloudflare for [DDoS protection](ddos-protection):

[diagram]

Cloudflare assigns a unique ID to each request and sets it as the value of the `CF-Ray` HTTP header. Render includes this header in the request it sends along to your service.

Whenever your web service receives an incoming request, it should include the value of the `CF-Ray` header in all logs generated for that request, including logging as soon as the request is received. Tracking the `CF-Ray` ID for each request helps you trace the execution of your individual requests, and it also helps Render's support team diagnose any issues that might occur earlier in this request flow.

## Set up an external monitoring probe

An external monitoring probe is similar to a [health check](#enable-health-checks), but it sends periodic HTTP requests to your web service from _outside_ Render. This more closely simulates traffic from your service's users.

We recommend creating your probe with a third-party monitoring provider, such as [HeyOnCall](https://heyoncall.com/guides/monitoring-your-render-app-with-heyoncall) or [Better Stack](https://betterstack.com/docs/uptime/monitoring-start/). In case of an incident, your provider will send a notification that includes the `CF-Ray` ID returned by Cloudflare.
This is handy for debugging in combination with your [service's logs](#log-the-cf-ray-id-of-each-request), or for sharing with Render support.

## Add retry logic to clients

If you maintain long-lived connections to your service (such as over WebSocket), make sure to implement retry logic for those connections.

Render routes each connection to a _particular instance_ of your service running on a _particular machine_, and Render might replace an instance at any time as part of a deploy or standard maintenance. Replacing an instance this way is a [zero-downtime](deploys#zero-downtime-deploys) event, but terminating the old instance does by necessity terminate all connections to it.

By implementing retry logic, you can quickly restore your long-lived connection to a running instance.

## Test your data backups

Database hiccups happen, and you can resolve them much faster when you've prepared ahead of time. Make sure you've thoroughly tested your data backup and [recovery procedure](postgresql-backups#perform-a-recovery), so you can fix your service as quickly as possible whenever the time comes.

# Render Webhooks

You can configure *webhooks* for your Render workspace to notify other systems when specific service events occur (such as a deploy starting or a service scaling down):

[diagram]

Use webhooks to trigger custom actions in third-party services (chat platforms, CI/CD, etc.) or in your own Render apps.

> *Webhooks require a Professional plan or higher.*
>
> - *Professional* workspaces can push webhook events to one destination URL.
> - *Organization* and *Enterprise* workspaces can push different sets of events to up to 100 destination URLs.

## Example apps

These example apps demonstrate listening for Render webhook notifications and performing custom actions in response. Fork them on GitHub to get started quickly.
| Integration | Description |
|--------|--------|
| [Basic webhook logger](https://github.com/render-examples/webhook-receiver) | On receiving any Render webhook notification, this app logs the payload to standard output. Also demonstrates fetching additional data about the event and the corresponding service from the Render API. |
| [GitHub Actions trigger](https://github.com/render-examples/webhook-github-action) | Demonstrates triggering a GitHub Actions workflow after receiving a webhook. This example waits for a successful `deploy_ended` event, upon which it triggers the deploy of a dependent Render service. |
| [Discord bot](https://github.com/render-examples/webhook-discord-bot) | On receiving the `server_failed` webhook event, this app sends a message to a Discord channel. |

## Setup

### 1. Set up an HTTPS endpoint

Render sends webhook notifications as HTTPS POST requests to an endpoint you specify. This must be a URL that's reachable over the public internet—such as one hosted by a Render [web service](web-services)!

The [examples](#example-apps) above demonstrate setting up a simple app that listens for webhook notifications. While you're getting started, you can use an endpoint provided by a webhook testing tool. Many online tools provide a unique temporary URL for receiving webhook notifications and inspecting their payloads.

> *Render expects your endpoint to respond to incoming notifications with a 2xx-level HTTP status code within 15 seconds.*
>
> Full details of Render's webhook communication protocol are described [below](#communication-protocol).

### 2. Create a webhook

> Only workspace [admins](team-members#member-roles) can create and modify webhooks.

When your HTTPS endpoint is ready, you can create a webhook to start pushing notifications to it:

1. From your workspace home in the [Render Dashboard][dboard], click *Integrations > Webhooks* in the left sidebar.
2. Click *+ Create Webhook*. The following form appears:

   [img]

3.
Provide a *Name* for the webhook.
4. Provide the *URL* of the endpoint that will receive webhook notifications.
5. Select the *Events* that will trigger notifications.
   - You can choose any combination of supported [event types](#event-types).
6. Click *Create Webhook*.

You're all set! Render starts sending webhook notifications to your specified endpoint whenever the selected events occur.

### 3. Define handling logic

Your webhook endpoint can perform any logic you want in response to incoming notifications. This might include:

- Logging notification payloads to a file or database
- Triggering a CI/CD workflow
- Sending a message to a chat platform

To enable these and other actions, your application needs to properly parse and validate incoming webhook notifications as described in [Communication protocol](#communication-protocol).

## Communication protocol

Render's webhook implementation follows the specification defined by the [Standard Webhooks project](https://github.com/standard-webhooks/standard-webhooks/blob/main/spec/standard-webhooks.md). The project provides a collection of [client libraries](https://www.standardwebhooks.com/#resources) in many languages to help you interact with webhook notifications. We recommend using these libraries to simplify your webhook implementation.

### Endpoint responses

Whenever your [webhook endpoint](#1-set-up-an-https-endpoint) receives a notification request, it should respond with a 2xx-level HTTP status code within 15 seconds. If your endpoint takes longer to respond or returns any other status code, Render considers the delivery attempt to have failed and retries it (see [Delivery failures and retries](#delivery-failures-and-retries)).
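One way to stay comfortably inside that 15-second window is to acknowledge each notification immediately and do any slow work asynchronously. The queue-and-worker structure below is our illustration, not a Render requirement:

```python
# Sketch: acknowledge webhook notifications immediately, process them later.
# The queue/worker split is illustrative; Render only requires a 2xx-level
# response within 15 seconds.
import queue
import threading

notifications: "queue.Queue[dict]" = queue.Queue()

def handle_notification(payload: dict) -> int:
    """Called by your HTTP framework for each incoming POST.

    Enqueues the payload and returns an HTTP status right away, before
    any slow processing happens.
    """
    notifications.put(payload)
    return 200

def process(payload: dict) -> None:
    # Slow or failure-prone work (CI triggers, chat messages, etc.)
    # goes here, off the request path.
    print(f"handling {payload.get('type')} event")

def worker() -> None:
    while True:
        payload = notifications.get()
        process(payload)
        notifications.task_done()

threading.Thread(target=worker, daemon=True).start()
```

In production you'd typically replace the in-process queue with a durable one, so notifications survive restarts.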
### Request body

The payload of each webhook notification request is a small JSON object with the following fields:

```json
{
  "type": "deploy_ended",
  "timestamp": "2025-02-25T16:22:19.979294509Z",
  "data": {
    "id": "evt-cuuuses015js70180jk0",
    "serviceId": "srv-cukouhrtq21c73e9scng",
    "serviceName": "my-service",
    "status": "succeeded" // Only present for certain notification types
  }
}
```

| Field | Description |
|--------|--------|
| `type` | The type of event that occurred. For supported values, see [Event types](#event-types). |
| `timestamp` | The timestamp when the service event occurred, in ISO 8601 format. This is different from the value of the [`webhook-timestamp`](#webhook-timestamp) header, which indicates when the notification request was sent. |
| `data.id` | The unique ID of the service event that triggered the notification. This value starts with `evt-`. This value is identical for all [retries](#delivery-failures-and-retries) of a given notification. You can use it to help ensure idempotency in your endpoint's logic. You also provide this value to the Render API's [Retrieve event](https://api-docs.render.com/reference/retrieve-event) endpoint to fetch additional details about the event. |
| `data.serviceId` | The unique ID of the service that the event pertains to. |
| `data.serviceName` | The name of the service that the event pertains to. |

This "thin" payload format keeps notifications small, fast, and predictable. To obtain additional details specific to the event's type, see [Fetching full event details](#fetching-full-event-details).

### Request headers

Each webhook notification request includes the following headers (example values shown):

```yaml
webhook-id: evt-cv4cjhnnoe9s73c9l7s0
webhook-timestamp: 1741212102
webhook-signature: v1,XcslFHBlNT6cZYDOJVYUJGZMCNZgTArfO34vTJmjrj4=
```

###### `webhook-id`

The unique ID of the service event that triggered the notification. This value starts with `evt-`.
This value is identical for all [retries](#delivery-failures-and-retries) of a given notification. You can use it to help ensure idempotency in your webhook handler.

###### `webhook-timestamp`

The timestamp when the notification request was sent, as seconds since the Unix epoch. Use this value to verify that the notification was sent recently (such as within the last five minutes). The Standard Webhooks [client libraries](https://www.standardwebhooks.com/#resources) each provide a validation function that includes this check.

This value is _not_ identical across retries.

###### `webhook-signature`

A Render-generated signature that you can use to verify the authenticity of the notification. For details, see [Validating notifications](#validating-notifications).

### Delivery failures and retries

If a webhook delivery fails (i.e., the endpoint doesn't respond with a 2xx-level status code within 15 seconds), Render retries it, up to a maximum of eight attempts per notification. After the third failure, Render sends you an email notification.

Retries use exponential backoff, with the final attempt occurring approximately 33 hours after the first.

> **If a webhook fails all delivery attempts for a given notification, Render disables the webhook.**
>
> Whenever this happens, Render again notifies you by email. After you resolve the underlying issue, you can reenable the webhook from its Settings page in the Render Dashboard.

### Validating notifications

Render generates a signature for each webhook notification, which it includes in the request's [`webhook-signature` header](#webhook-signature). You can use this signature to verify that the notification was sent by Render and has not been tampered with.

The Standard Webhooks project provides [client libraries](https://www.standardwebhooks.com/#resources) in many languages to help you validate webhook notifications, along with a helpful [verifier tool](https://www.standardwebhooks.com/verify).
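If you'd rather not pull in a library, verification can be sketched by hand. The sketch below follows the Standard Webhooks convention, under which the signing secret (base64-decoded if it carries a `whsec_` prefix) serves as the HMAC-SHA256 key over the `webhook-id`, `webhook-timestamp`, and request body joined with periods; the five-minute timestamp tolerance and constant-time comparison are our choices:

```python
# Standard Webhooks-style signature verification sketch.
# Assumptions: the signing secret is the HMAC-SHA256 key over
# "id.timestamp.body" (per the Standard Webhooks spec); the
# tolerance window is our choice.
import base64
import hashlib
import hmac
import time

def verify_signature(secret: str, webhook_id: str, timestamp: str,
                     body: bytes, signature_header: str,
                     tolerance_seconds: int = 300) -> bool:
    # Reject stale notifications (webhook-timestamp is seconds since the epoch).
    if abs(time.time() - int(timestamp)) > tolerance_seconds:
        return False
    # Secrets may carry a "whsec_" prefix around a base64-encoded key.
    if secret.startswith("whsec_"):
        key = base64.b64decode(secret[len("whsec_"):])
    else:
        key = secret.encode()
    signed_content = f"{webhook_id}.{timestamp}.".encode() + body
    expected = base64.b64encode(
        hmac.new(key, signed_content, hashlib.sha256).digest()
    ).decode()
    # The header may list several space-separated "v1,<sig>" entries.
    for version_sig in signature_header.split():
        version, _, candidate = version_sig.partition(",")
        if version == "v1" and hmac.compare_digest(candidate, expected):
            return True
    return False
```

Note the use of `hmac.compare_digest`, which avoids leaking information through comparison timing.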
#### Signature format

A webhook's signature is generated by providing the following string to the HMAC-SHA256 algorithm:

```
WEBHOOK_ID.WEBHOOK_TIMESTAMP.REQUEST_BODY.SIGNING_SECRET
```

In this string, the following values are separated by periods (`.`):

- `WEBHOOK_ID`: The value of the request's `webhook-id` header
- `WEBHOOK_TIMESTAMP`: The value of the request's `webhook-timestamp` header
- `REQUEST_BODY`: The value of the request's body
- `SIGNING_SECRET`: Your webhook's **signing secret**, which is provided on the webhook's Settings page in the Render Dashboard:

[img]

> **Keep your signing secret secure!**
>
> Don't publicly post your signing secret, commit it to version control, or otherwise share it outside your organization.
>
> **If you believe a signing secret has been compromised:**
>
> 1. [Create a _new_ webhook](#2-create-a-webhook) with the same settings as the compromised one.
> 2. Update your webhook endpoint to perform validation using the new webhook's signing secret.
> 3. Delete the compromised webhook.

## Fetching full event details

The payload of a webhook notification includes only basic information, such as the event's type and unique ID:

```json
{
  "type": "deploy_started",
  "timestamp": "2025-02-25T16:22:19.979294509Z",
  "data": {
    "id": "evt-cuuuses015js70180jk0",
    "serviceId": "srv-cukouhrtq21c73e9scng",
    "serviceName": "my-service"
  }
}
```

You can fetch additional details specific to a given event with the Render API's [Retrieve event](https://api-docs.render.com/reference/retrieve-event) endpoint. The `details` object returned by this endpoint includes different fields depending on the provided event's type.
For example, the response for an [`autoscaling_ended`](#autoscaling-ended) event includes a `fromInstances` field (the previous instance count) and a `toInstances` field (the new instance count):

```json
{
  "id": "evt-cph1rs3idesc73a2b2mg",
  "timestamp": "2025-02-27T07:05:21.091Z",
  "serviceId": "srv-cukouhrtq21c73e9scng",
  "type": "autoscaling_ended",
  "details": {
    "fromInstances": 1,
    "toInstances": 2
  }
}
```

For details on the fields returned for each event type, see the [API reference](https://api-docs.render.com/reference/retrieve-event).

## Event types

A given webhook can send notifications for any combination of supported event types. You specify which events trigger a notification during webhook creation, and you can update this selection at any time.

In the Render Dashboard, event types are displayed in human-readable form (e.g., "Build Ended" instead of `build_ended`).

### Deployment lifecycle

###### `build_ended`

A build completed for a service. This event's payload includes a `status` field that indicates whether the build `succeeded`, `failed`, or was `canceled`.

###### `build_started`

A build started for a service.

###### `deploy_ended`

A deploy completed for a service. This event's payload includes a `status` field that indicates whether the deploy `succeeded`, `failed`, or was `canceled`.

###### `deploy_started`

A deploy started for a service.

###### `image_pull_failed`

Render failed to pull a service's associated Docker image from its registry. This event is specific to [image-backed services](deploying-an-image).

###### `job_run_ended`

The execution of a [one-off job](one-off-jobs) completed. This event's payload includes a `status` field that indicates whether the job `succeeded`, `failed`, or was `canceled`.

###### `pre_deploy_ended`

A service's [pre-deploy command](deploys#pre-deploy-command) completed.

###### `pre_deploy_started`

A service's [pre-deploy command](deploys#pre-deploy-command) started.
###### `commit_ignored`

A service skipped automatic deployment for a particular Git commit based on its [commit message](deploys#skipping-an-auto-deploy).

###### `branch_deleted`

A service's linked Git branch was deleted. This disables automatic deploys for the service until you link a new branch.

### Service availability

###### `maintenance_ended`

A platform maintenance window ended for a service.

###### `maintenance_mode_enabled`

User-initiated [maintenance mode](maintenance-mode) was enabled for a web service.

###### `maintenance_mode_uri_updated`

The URL for a web service's [maintenance mode](maintenance-mode) page was updated.

###### `maintenance_started`

A platform maintenance window started for a service.

###### `server_available`

A previously unavailable service became available.

###### `server_failed`

A service became unavailable, usually due to a runtime error.

###### `server_hardware_failure`

A service became unavailable due to an underlying hardware failure.

###### `server_restarted`

A service restarted.

###### `service_resumed`

A previously suspended service resumed.

###### `service_suspended`

A service was suspended.

###### `zero_downtime_redeploy_ended`

A Render-initiated zero-downtime deploy completed for a service.

###### `zero_downtime_redeploy_started`

A Render-initiated zero-downtime deploy started for a service.

### Scaling

These event types pertain to [scaling](scaling) services, including [manual scaling](scaling#manual-scaling) and [autoscaling](scaling#autoscaling).

###### `instance_count_changed`

A [manually scaled](scaling#manual-scaling) service's instance count was changed. This event does _not_ trigger for [autoscaled](scaling#autoscaling) services.

###### `autoscaling_ended`

An [autoscaled](scaling#autoscaling) service finished adding or removing instances in response to load.

###### `autoscaling_started`

An [autoscaled](scaling#autoscaling) service started adding or removing instances in response to load.
###### `autoscaling_config_changed`

A service's [autoscaling](scaling#autoscaling) configuration changed (such as increasing or decreasing the maximum instance count).

### Service config

###### `plan_changed`

A service's instance type changed. In the Render Dashboard only, this event is referred to as **Instance Type Changed**. In notifications, this event's name is `plan_changed`, _not_ `instance_type_changed`.

### Cron jobs

These event types pertain to [cron jobs](cronjobs).

###### `cron_job_run_ended`

A run of a cron job completed. This event's payload includes a `status` field that indicates whether the run `succeeded`, `failed`, or was `canceled`.

###### `cron_job_run_started`

A run of a cron job started.

### Render Postgres

These event types pertain to [Render Postgres](postgresql) databases.

###### `postgres_available`

A previously unavailable Render Postgres instance became available.

###### `postgres_backup_completed`

A [manually triggered export](postgresql-backups#trigger-a-backup) completed for a Render Postgres database.

###### `postgres_backup_failed`

A [manually triggered export](postgresql-backups#trigger-a-backup) failed for a Render Postgres database.

###### `postgres_backup_started`

A [manually triggered export](postgresql-backups#trigger-a-backup) started for a Render Postgres database.

###### `postgres_cluster_leader_changed`

A [high availability](postgresql-high-availability) Render Postgres database failed over to its standby.

###### `postgres_created`

A Render Postgres database was created.

###### `postgres_credentials_created`

A new PostgreSQL user was created for a Render Postgres database. See [Managing Postgres Credentials](postgresql-credentials).

###### `postgres_credentials_deleted`

A PostgreSQL user was deleted from a Render Postgres database. See [Managing Postgres Credentials](postgresql-credentials).

###### `postgres_disk_size_changed`

The storage capacity of a Render Postgres database changed.
###### `postgres_ha_status_changed`

[High availability](postgresql-high-availability) was toggled on or off for a Render Postgres database.

###### `postgres_pitr_checkpoint_completed`

[Point-in-time recovery](postgresql-backups) completed its daily checkpoint for a Render Postgres database.

###### `postgres_pitr_checkpoint_failed`

[Point-in-time recovery](postgresql-backups) failed its daily checkpoint for a Render Postgres database.

###### `postgres_pitr_checkpoint_started`

[Point-in-time recovery](postgresql-backups) started its daily checkpoint for a Render Postgres database.

###### `postgres_restarted`

A Render Postgres database restarted.

###### `postgres_restore_failed`

A [point-in-time recovery](postgresql-backups) restore failed for a Render Postgres database.

###### `postgres_restore_succeeded`

A [point-in-time recovery](postgresql-backups) restore succeeded for a Render Postgres database.

###### `postgres_unavailable`

A Render Postgres database became unavailable.

###### `postgres_upgrade_failed`

A PostgreSQL version upgrade failed.

###### `postgres_upgrade_started`

A PostgreSQL version upgrade started.

###### `postgres_upgrade_succeeded`

A PostgreSQL version upgrade completed successfully.

###### `postgres_read_replica_stale`

A Render Postgres [read replica](postgresql-read-replicas) has stopped syncing with its primary instance. To resolve this, please [contact support](https://dashboard.render.com?contact-support) in the Render Dashboard.

###### `postgres_read_replicas_changed`

The number of read replicas associated with a Render Postgres database changed.

###### `postgres_wal_archive_failed`

[Point-in-time recovery](postgresql-backups) failed a WAL archive for a Render Postgres database.

###### `postgres_disk_autoscaling_enabled_changed`

Storage autoscaling was toggled for a Render Postgres database.

### Render Key Value

These event types pertain to [Render Key Value](key-value) instances.
###### `key_value_available`

A Key Value instance became available.

###### `key_value_config_restart`

A Key Value instance restarted.

###### `key_value_unhealthy`

A Key Value instance became unhealthy.

### Persistent disks

These event types pertain to [persistent disks](disks) attached to services.

###### `disk_created`

A new [persistent disk](disks) was added to a service.

###### `disk_updated`

A service's [persistent disk](disks) configuration was updated.

###### `disk_deleted`

A service's [persistent disk](disks) was deleted.

## History of webhook event changes

| Date | Change |
|--------|--------|
| `2025-11-20` | Added the [`postgres_credentials_created`](#postgres-credentials-created) and [`postgres_credentials_deleted`](#postgres-credentials-deleted) event types. |
| `2025-11-10` | Added the [`serviceName`](#dataservicename) field to all event payloads. Added the [`status`](#datastatus) field to payloads for the [`build_ended`](#build-ended), [`deploy_ended`](#deploy-ended), [`cron_job_run_ended`](#cron-job-run-ended), and [`job_run_ended`](#job-run-ended) event types. Removed the `server_unhealthy` event type; use the [`server_failed`](#server-failed) event type instead. |
| `2025-10-30` | Added the [`postgres_disk_autoscaling_enabled_changed`](#postgres-disk-autoscaling-enabled-changed) event type. |
| `2025-08-05` | Added the [`postgres_wal_archive_failed`](#postgres-wal-archive-failed) event type. |
| `2025-05-30` | Added the [`postgres_restore_failed`](#postgres-restore-failed) and [`postgres_restore_succeeded`](#postgres-restore-succeeded) event types. |
| `2025-05-19` | Added the [`postgres_read_replica_stale`](#postgres-read-replica-stale) event type. |
| `2025-05-06` | Added the [`postgres_backup_failed`](#postgres-backup-failed), [`postgres_pitr_checkpoint_completed`](#postgres-pitr-checkpoint-completed), [`postgres_pitr_checkpoint_failed`](#postgres-pitr-checkpoint-failed), and [`postgres_pitr_checkpoint_started`](#postgres-pitr-checkpoint-started) event types. |
| `2025-03-11` | Added initial set of [event types](#event-types). |

# Email and Slack Notifications

Render can notify you via email and/or Slack when certain events occur (such as when your service's deploy fails).

You can [set workspace-level defaults](#setting-workspace-defaults) for notifications, and you can also [customize notifications](#customizing-per-service) for individual services.

> *Want to trigger custom workflows from a wide variety of platform events?*
>
> See [Webhooks](webhooks).

## Supported notifications

Render can notify you of the following events, depending on which notification level you set (*Only failure notifications* or *All notifications*):

| Event | Minimum Notification Level |
|--------|--------|
| A service build or deploy fails. | Only failure notifications |
| A Docker [image pull](deploying-an-image) fails. | Only failure notifications |
| A [cron job](cronjobs) execution fails. | Only failure notifications |
| A [one-off job](one-off-jobs) execution fails. | Only failure notifications |
| A running service becomes unhealthy. | Only failure notifications |
| A deploy successfully goes live. | All notifications |
| An unhealthy service becomes healthy. | All notifications |

To request notification support for additional events, please [submit a feature request](https://feedback.render.com/features).
## Setting workspace defaults

From your workspace home in the [Render Dashboard][dboard], click *Integrations > Notifications* in the left pane:

[img]

From here, you can configure the following:

| Setting | Description |
|--------|--------|
| *Notification Destination* | Receive notifications via *Email*, *Slack*, or both. To receive via Slack, you must first [connect your Slack workspace](#connecting-to-slack). |
| *Default Service Notifications* | Specifies which [supported notifications](#supported-notifications) Render sends for your services. Options: *Only failure notifications* (Render sends notifications only for failures, including failed deploys, cron jobs, and running services), *All notifications* (Render sends _all_ supported notifications, including for successful deploys), or *None* (Render does not send any notifications). |
| *Preview Notifications* | If *Enabled*, Render sends the same set of notifications for a [service preview](service-previews) or [preview environment](preview-environments) that it does for the preview's base service. This setting requires a *Professional* workspace or higher. |

### Connecting to Slack

From your workspace's *Integrations > Notifications* page, click *Connect Slack* under the *Notification Destination* setting:

[img]

Proceed through the authorization flow to connect your Slack account.

## Customizing per service

You can customize notification settings for an individual service. Doing so overrides your workspace's [default notification settings](#setting-workspace-defaults) for that service.

In the [Render Dashboard][dboard], go to your service's *Settings* page and scroll down to *Notifications*:

[img]

For any setting, choose a value besides *Use workspace default* to customize the service's notification behavior.
After you customize notification settings for a service, that service appears in your workspace's *Integrations > Notifications* page, under *Notification Overrides*: [img] # Service Metrics View any service's usage metrics from its *Metrics* page in the [Render Dashboard][dboard]: [img] Use these metrics in combination with your service's [logs](logging) to help diagnose issues as they arise. > *Want to stream OpenTelemetry metrics to your observability provider?* > > See [Streaming Render Service Metrics](metrics-streams). ## Available metrics Depending on your service's type, the *Metrics* page shows graphs for one or more of the following: | Metric(s) | Which services? | |--------|--------| | [*CPU and memory usage*](#cpu-and-memory-usage) | All services except [static sites](static-sites) | | [*Disk storage*](#disk-storage) | All services with persistent storage, including: - [Render Postgres](postgresql) databases - [Render Key Value](key-value) instances (only the *Disk Activity* graph) - Services with an attached [persistent disk](disks) | | [*HTTP requests*](#http-requests) | [Web services](web-services) only Some features of these graphs require a [*Professional* workspace](professional-features) or higher. | | [*Outbound bandwidth*](#outbound-bandwidth) | All service types | ### CPU and memory usage Your service's Metrics page displays CPU and memory usage in the *Application Metrics* section: [img] Use the controls at the top of the section to customize these graphs: - If you've [scaled](scaling) your service, you can view metrics for all its instances, or for any subset. - When viewing metric values for multiple instances, you can aggregate those values into a _single_ value. - The aggregate value can use the minimum, maximum, or average value across your selected instances. - You can view each metric as its actual value (such as 500 MB of memory), or as a percentage of the maximum allowed value for your service's [instance type](pricing#services). 
### Disk storage The Metrics page shows disk-related metrics for the following services: - [Render Postgres](postgresql) databases - [Render Key Value](key-value) instances - Key Value instances only show the *Disk Activity* graph. - Services with an attached [persistent disk](disks) - Web services, private services, and background workers support attaching a persistent disk. Disk-related metrics include: | Metric | Description | |--------|--------| | *Disk Usage* | The amount of disk space used by your service. This helps you identify when you're approaching your instance's current storage limit. | | *Disk Activity* | The amount of data your service has read from and written to disk. [Free Key Value](free#free-key-value) instances _don't_ display this metric, because they don't persist data to disk. | | *Disk Operations* | The number of read and write operations your service has performed on its disk. | ### HTTP requests > Certain features of HTTP request metrics require a [*Professional* workspace](professional-features) or higher. The Metrics page for a [web service](web-services) shows graphs for HTTP request volume and response latency in the *Network Metrics* section. Note that these graphs show metrics only for requests from the public internet—they _don't_ include requests over your [private network](private-network). #### Request volume The *Total Requests* graph shows your web service's HTTP request volume over your selected time range: [img] Use the controls at the top of the section to customize this graph: - You can filter the graph to include only requests that returned a particular HTTP status code. - You can group each bar in the graph by the HTTP status code returned for those requests. - Both of these controls can help you identify time periods that had a high error rate. Teams can perform additional customizations: - Teams can filter the graph to include only requests that were sent to a particular host (i.e., domain) or path. 
- Teams can group each bar in the graph by which host each request was sent to. #### Response latency > Response latency metrics require a [*Professional* workspace](professional-features) or higher. The *Response Times* graph shows your web service's response latency for commonly used percentiles (p50, p75, p90, and p99): [img] Click the *Percentile* dropdown to display only a specific percentile. ### Outbound bandwidth The Metrics page shows your service's recent [outbound bandwidth](outbound-bandwidth) usage under the *Network Metrics* section: [img] This graph displays up to four categories of outbound bandwidth, depending on your service's type and its recent network activity: | Category | Description | |--------|--------| | *HTTP Responses* | [Web services](web-services) and [static sites](static-sites) only (other service types can't receive HTTP requests over the public internet) Your service's responses to HTTP requests initiated by browsers and other clients over the public internet. | | *WebSocket Responses* | [Web services](web-services) only (other service types can't receive WebSocket connections over the public internet) Your service's responses to WebSocket connections initiated by browsers and other clients over the public internet. | | *Service-Initiated* | All service types. Traffic initiated by your service to any destination over the public internet (e.g., connecting to a third-party API). Includes all protocols (HTTP, WebSocket, etc.). | | *Service-Initiated (Private Link)* | All service types. Traffic initiated by your service to any destination over a [private link connection](private-network#integrating-with-aws-privatelink). Includes all protocols (HTTP, WebSocket, etc.). | Note the following: - This graph's resolution is fixed at one data point per hour. Each point represents the amount of outbound bandwidth used during the previous hour. 
- This graph might display very small data point values (less than 1 MB) as 0. - You can customize this graph's time range, but it doesn't support any other filters. ### Database activity The Metrics page for a [Render Postgres](postgresql) database includes the following database-specific metrics: | Metric | Description | |--------|--------| | *Active Connections* | The number of open connections to your database from all connecting clients. This graph is also available for [Render Key Value](key-value) instances. | | *Network Activity* | The amount of data your database has read from and written to the network. | | *Transaction Volume* | The number of transactions executed by your database. | | *Replication Lag* | The amount of time your primary database takes to sync changes to any [read replicas](postgresql-read-replicas). This graph appears only if your database has at least one read replica. | | *Lock-Delayed Queries* | The number of recently completed database queries that were delayed by another operation holding a lock for one second or longer. Queries appear on this graph _after_ they've completed. They do not appear while they're still waiting on a lock. | | *Running Processes* | Click the *Queries* tab at the top of your database's Metrics page to view a table of processes that are currently running on your database. Most of these processes correspond to a client connection. Processes with the status `idle` are not actively executing a query. In this case, the table's *Duration* column shows the execution time of the process's most recently completed query. | | *Top Queries* | Click the *Queries* tab at the top of your database's Metrics page to view a table of the queries that have been executed most frequently on your database. 
| ## Metrics retention period Your metrics retention period depends on your workspace's plan (see the [pricing page](pricing)): | Workspace Plan | Retention Period | | ------------------------- | ---------------- | | Hobby | 7 days | | Professional | 14 days | | Organization / Enterprise | 30 days | # Streaming Render Service Metrics Workspaces with a *Professional* plan or higher can push a variety of service metrics (memory usage, disk capacity, etc.) to an [OpenTelemetry](https://opentelemetry.io/)-compatible observability provider, such as New Relic, Honeycomb, or Grafana. [img] > *Render does not emit metrics for the following:* > > - [Static sites](static-sites) > - [Cron jobs](cronjobs) > - [One-off jobs](one-off-jobs) ## General setup The following steps must be performed by a workspace [admin](team-members#member-roles): 1. From your workspace's home in the [Render Dashboard][dboard], select *Integrations > Observability* in the left sidebar: [img] 2. Under *Metrics Stream*, click *+ Add destination*. The following dialog appears: [img] 3. Select your observability provider from the dropdown. The dialog updates to display fields specific to your provider. > If your provider isn't listed, select *Custom*. [Learn how to connect a custom provider.](#other-providers-custom) 4. Fill in the provider-specific fields. - See instructions for your provider [below](#provider-specific-config). 5. Click *Add destination*. You're all set! Your provider will start receiving [reported metrics](#reported-metrics) from Render shortly. ## Provider-specific config When creating a metrics stream for your Render workspace, you provide different information depending on your observability provider: [img] See details for each supported provider below, along with instructions for [other providers](#other-providers-custom). Please also consult your provider's documentation for additional information. 
> If there’s a provider you’d like us to add to this list, please submit a [feature request](https://feedback.render.com). ### New Relic For *Region*, select *US* or *EU* according to where your New Relic data is hosted. For *License key*, create a new key with the following steps: 1. From your New Relic [API keys page](https://one.newrelic.com/api-keys), click *Create a key*. The following dialog appears: [img] 2. For the *Key type*, select *Ingest - License*. 3. Add a descriptive *Name* (e.g., "Render Metrics Integration"). 4. Click *Create Key*. ### Honeycomb For *Region*, select *US* or *EU* according to where your Honeycomb data is hosted. For *API key*, create a new key with the following steps: 1. In your Honeycomb dashboard, hover over *Manage Data* on the bottom left and click *Send Data*: [img] 2. Click *Manage API keys*. 3. Click *Create Ingest API Key*. The following dialog appears: [img] 4. Add a descriptive *Name* (e.g., "Render Metrics Integration"). 5. Make sure *Can create services/datasets* is enabled. 6. Click *Create*. ### Grafana Obtain both your *Endpoint* and *API Token* with the following steps: 1. From your Grafana Cloud Portal (`grafana.com/orgs/[your-org-name]`), click *Details* for the Grafana stack you want to use: [img] 2. Find the *OpenTelemetry* tile and click *Configure*. 3. Copy the value of *Endpoint for sending OTLP signals* (this is your *Endpoint*). 4. Under *Password / API Token*, click *Generate now*. 5. Add a token name (e.g., `render_metrics_integration`). 6. Click *Create Token*. 7. Copy the generated value starting with `glc_` (this is your *API Token*). For more details, see the [Grafana documentation](https://grafana.com/docs/grafana-cloud/send-data/otlp/send-data-otlp/#manual-opentelemetry-setup-for-advanced-users). ### Datadog > To simplify metrics ingestion with Datadog, Render pushes metrics in Datadog's native format instead of using OpenTelemetry. 
Specify your *Datadog site* according to where your Datadog data is hosted. For *API key*, generate a new organization-level API key from your [organization settings page](https://app.datadoghq.com/organization-settings/api-keys). You _cannot_ use an application key or a user-scoped API key. ### Better Stack Obtain both your *Ingesting host* and *Source token* with the following steps: 1. From your *Telemetry > Sources* page in Better Stack, click *Connect source*. The following page appears: [img] 2. Add a descriptive *Name* (e.g., "Render Metrics Integration"). 3. Select *OpenTelemetry* as the *Platform*. 4. Click *Connect source*. Better Stack creates the new source and redirects you to its details page. 5. Copy your source's *Ingesting host* URL and *Source token*. ### Other providers (custom) > Consult this section only if your observability provider isn't listed above. Render can push service metrics to your OpenTelemetry-compatible endpoint, _if_ that endpoint authenticates requests via an API key provided as a bearer token in an `Authorization` header. *If your provider's endpoint supports authentication via bearer token:* 1. Consult your provider's documentation to obtain your OpenTelemetry endpoint and API key. 2. Specify *Custom* as your provider in the metrics stream creation dialog, then provide your endpoint and API key in the corresponding fields. *If your provider's endpoint requires a different authentication method:* 1. Please [submit a feature request](https://feedback.render.com) to let us know about your provider's requirements. 2. You can spin up your own OpenTelemetry collector (such as the official [vendor-agnostic implementation](https://github.com/open-telemetry/opentelemetry-collector)). Your collector's endpoint can receive metrics from Render, then transform and forward them to your provider using whatever authentication method it expects. 
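If you do run your own collector, its configuration can be quite small. The following is a minimal sketch only: the receiver port, provider endpoint, header name, and environment variable are all hypothetical placeholders—consult your provider's documentation for the real values.

```yaml
# Hypothetical collector config: accept OTLP metrics over HTTP,
# then forward them to a provider that expects a custom auth header.
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  otlphttp:
    endpoint: https://otel.example-provider.com  # placeholder
    headers:
      X-Custom-Auth: "${env:PROVIDER_API_KEY}"   # placeholder header/key

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]
```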
## Reported metrics Render streams service metrics that pertain to the following categories: - [CPU](#cpu) - [Memory](#memory) - [HTTP requests](#http-requests) - [Data storage](#data-storage) All metrics use OpenTelemetry JSON format. The first component of each metric's name is `render` (e.g., `render.service.memory.usage`). > *Some observability providers transform metric names to match their conventions.* > > For example, Grafana converts the metric `render.service.memory.usage` to `render_service_memory_usage_bytes`. > > After you set up your metrics stream, inspect incoming data in your provider's dashboard to verify how it identifies Render metrics. See names, descriptions, and included properties for each reported metric below. ### Universal properties All reported metrics include the following properties: | Property | Description | |--------|--------| | `service.name` | The name of the service (e.g., `my-service`). Grafana displays this property as `job`. | | `service.id` | The ID of the service (e.g., `srv-abc123`). | | `service.instance.id` | For _most_ metrics, this is the ID of the metric's associated service instance (e.g., `srv-abc123-def456`). Everything before the final hyphen is the service ID (`srv-abc123`), and the final component (`def456`) uniquely identifies the instance. ([HTTP request metrics](#http-requests) are an exception; see below.) This value enables you to segment metrics by individual instances of a [scaled service](scaling), and to identify when a service's instances are cycled as part of a redeploy. | The following properties are also universal but optional: | Property | Description | |--------|--------| | `service.project` | The name of the service's associated [project](projects), if it belongs to one (otherwise omitted). | | `service.environment` | The name of the service's associated [environment](projects), if it belongs to one (otherwise omitted). | ### CPU These metrics apply to all compute instances and datastores. 
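The CPU-time metric below is a cumulative counter, so it only becomes meaningful once you compute a rate over it. As a rough illustration of what your provider's `rate()` function does, here's a Python sketch with made-up sample values (timestamp/value pairs are hypothetical):

```python
def cpu_rate(samples):
    """Approximate per-second CPU usage from cumulative CPU-time samples.

    `samples` is a list of (unix_timestamp, cumulative_cpu_seconds) pairs,
    as a provider might store for a cumulative CPU-time counter.
    """
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        # CPU-seconds consumed per wall-clock second between two samples
        rates.append((v1 - v0) / (t1 - t0))
    return rates

# Illustrative samples taken 60 seconds apart (values are made up):
samples = [(0, 100.0), (60, 130.0), (120, 190.0)]
print(cpu_rate(samples))  # → [0.5, 1.0], i.e. ~0.5 then ~1.0 cores busy
```

A value of `1.0` means the instance consumed one full CPU core's worth of time during that interval; compare it against the CPU limit metric to gauge headroom.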
###### `render.service.cpu.limit` The maximum amount of CPU available to a particular service instance (as determined by its instance type). Includes [universal properties](#universal-properties) only. ###### `render.service.cpu.time` The cumulative amount of CPU time used by a particular service instance, in seconds. To visualize changes to CPU load over time, apply a `rate()` function or similar in your observability provider. Includes [universal properties](#universal-properties) only. ### Memory These metrics apply to all compute instances and datastores. ###### `render.service.memory.limit` The maximum amount of memory available to a particular service instance (as determined by its instance type), in bytes. Includes [universal properties](#universal-properties) only. ###### `render.service.memory.usage` The amount of memory that a particular service instance is currently using, in bytes. Includes [universal properties](#universal-properties) only. ### HTTP requests These metrics apply only to [web services](web-services). > *HTTP request metrics are not reported per instance.* > > Render aggregates these metrics across all instances of a given web service. For these metrics, the value of [`service.instance.id`](#serviceinstanceid) matches that of [`service.id`](#serviceid). ###### `render.service.http.requests.total` The cumulative number of HTTP requests that a given service has received _across all instances_, segmented by the properties below. To visualize changes to request load over time, apply a `rate()` function or similar in your observability provider. Includes [universal properties](#universal-properties), along with the following: | Property | Description | |--------|--------| | `host` | The destination domain for incoming requests. This can be your service's `onrender.com` domain or any [custom domain](custom-domains) you've added. | | `status_code` | The HTTP status code returned by the service (`200`, `404`, and so on). 
| ###### `render.service.http.response.latency` Provides a particular web service's p50, p95, or p99 response time, segmented by the properties below. Includes [universal properties](#universal-properties), along with the following: | Property | Description | |--------|--------| | `quantile` | Indicates the percentile of the provided latency value. One of the following: - `0.50` (p50) - `0.95` (p95) - `0.99` (p99) | | `host` | The destination domain for incoming requests. This can be your service's `onrender.com` domain or any [custom domain](custom-domains) you've added. | | `status_code` | The HTTP status code returned by the service instance (`200`, `404`, and so on). | ### Data storage Each of these metrics applies to one or more of [Render Postgres](postgresql), [Render Key Value](key-value), and [persistent disks](disks). ###### `render.service.disk.capacity` The total capacity of a service's persistent storage, in bytes. Applies to [Render Postgres](postgresql) databases and [persistent disks](disks). Includes [universal properties](#universal-properties) only. ###### `render.service.disk.usage` The amount of _occupied_ persistent storage for a service, in bytes. Applies to [Render Postgres](postgresql) databases and [persistent disks](disks). Includes [universal properties](#universal-properties) only. ###### `render.keyvalue.connections` The number of active connections to a particular Render Key Value instance. Includes [universal properties](#universal-properties) only. ###### `render.postgres.connections` The number of active connections to a particular Render Postgres instance. Includes [universal properties](#universal-properties), along with the following: | Property | Description | |--------|--------| | `database_name` | The name of the PostgreSQL database created in the instance (e.g., `my_db_abcd`). 
This value is helpful if your Render Postgres instance hosts [multiple databases](postgresql-creating-connecting#adding-multiple-databases-to-a-single-instance). This value usually does _not_ match the value of `service.name`. | ###### `render.postgres.replication.lag` The delay for a particular Render Postgres instance replicating changes to its [read replica](postgresql-read-replicas) (if it has one), in milliseconds. Includes [universal properties](#universal-properties) only. ### History of changes to reported metrics | Date | Change | |--------|--------| | `2025-03-11` | Added initial set of [reported metrics](#reported-metrics). | # Logs in the Render Dashboard Use Render's log explorer to view and search recent logs generated by your service. The explorer is available from your service's *Logs* page in the [Render Dashboard][dboard]: [img] In addition to searching for a particular string, you can filter by details like log level, instance, and time range. When you identify a log line of interest, mouse over it and click *View in context* to jump to its location in the full log history: [img] If you have a [*Professional* workspace](professional-features) or higher, the explorer also displays [HTTP request logs](#http-request-logs) for your web services. Separately, you can view logs for any recent [deploy or one-off job](#logs-for-an-individual-deploy-or-job). Note that Render does not emit logs for [static sites](static-sites). > *Want to stream logs to your syslog-compatible observability provider?* > > See [Streaming Render Service Logs](log-streams). ## Log line format Log lines in the explorer display the following information: [img] | Component | Description | |--------|--------| | *Timestamp* | The time the log was generated, in your local time zone. Mouse over the timestamp to view it in UTC and Unix formats. | | *Level* | The log level, such as `debug`, `warning`, or `error`. Mouse over the icon to view the log level as text. 
| | *Instance* | A string that uniquely identifies the service instance that generated the log. Click this value to add it as a search filter. [HTTP request logs](#http-request-logs) are aggregated at the service level (not the individual instance level), so they do not display this value. | | *Message* | The logged message. [HTTP request logs](#http-request-logs) instead display the details for the corresponding HTTP request, such as: - HTTP method - Status code - Requested URL | ## Log filters When searching with the log explorer, you can filter results by the following (in addition to searching for an arbitrary string): | Filter | Description | |--------|--------| | Time range | You can limit results to a predefined range (such as *Last 24 hours*), specify a _custom_ range, or select *Live tail* to view a live feed of recent logs. Available ranges depend on your workspace's [log retention period](#retention-period). Specify using the dropdown in the upper right of the log explorer. | | `level` | The log level, such as `debug`, `warning`, or `error`. Specify in the search box. | | `instance` | The ID of the service instance that generated the log. Helpful for filtering logs for a [scaled service](scaling), or for observing an instance swap during a [zero-downtime deploy](deploys#zero-downtime-deploys). Specify in the search box. You can also click the instance ID for any log line to add it as a filter. | | `method` | *[HTTP request logs](#http-request-logs) only.* The HTTP method of a particular request (such as `GET` or `POST`). Specify in the search box. | | `status_code` | *[HTTP request logs](#http-request-logs) only.* The response code for a particular request (such as `200`, `404`, or `500`). Specify in the search box. | | `host` | *[HTTP request logs](#http-request-logs) only.* The requested domain of a particular request (such as `my-web-service.onrender.com`). Specify in the search box. 
| ## Wildcards and regular expressions The log explorer supports searching with wildcards and regular expressions. To match any number of characters, use the wildcard token (`*`). To match against a regular expression, enclose your search in forward slashes (`/`). You can then use any metacharacters supported by the [RE2 syntax](https://github.com/google/re2/wiki/Syntax). You can use wildcards and regular expressions in search strings and in filters. See the table below for some useful examples. | Search | Description | |--------|--------| | `foo*bar` | Returns logs that contain `foo` followed by `bar` using wildcard search. | | `/foo.*bar/` | Returns logs that contain `foo` followed by `bar` using a regular expression. | | `/(foo\|bar)/` | Returns logs that contain `foo` or `bar`. | | `status_code:/4../` | Returns request logs with a `4xx` status code. | | `method:/(GET\|POST)/` | Returns request logs with a `GET` or `POST` method. | | `path:api/resource/*/subresource` | Returns request logs with a path that starts with `api/resource/` and ends with `/subresource`. | | `/responseTimeMS=\d{3}\d+/` | Returns request logs with a response time greater than one second. 
| ## Keyboard shortcuts The log explorer supports these keyboard shortcuts: | Action | Shortcut | |--------|--------| | Focus search bar | `/` | | Enable fullscreen | `M` | | Exit fullscreen | `M` or `Esc` | | Scroll (slow) | `Arrow Up` / `Arrow Down` | | Scroll (fast) | `Page Up` / `Page Down` | | Jump to top | `Home` | | Jump to bottom | `End` | | Copy all currently displayed logs | `CMD+Shift+C` (macOS) `CTRL+Shift+C` (Windows/Linux) | | Clear logs (live tail view only) | `CMD+Shift+L` (macOS) `CTRL+Shift+L` (Windows/Linux) | ## HTTP request logs If you have a [*Professional* workspace](professional-features) or higher, Render generates a log entry for each HTTP request to your team's web services from the public internet: [img] This helps you debug unexpected behavior for a request, in particular by tracing its execution via the [`requestID` field](#tracing-with-requestid-and-rndr-id). HTTP request logs appear alongside application logs in the explorer, and they support additional [filters](#log-filters) (such as `method` and `status_code`). > Render does _not_ generate request logs for HTTP requests sent from other services over your [private network](private-network)—only for requests sent to web services over the public internet. ### Tracing with `requestID` and `Rndr-Id` In each [HTTP request log entry](#http-request-logs), the value of the `requestID` field uniquely identifies the associated request: ``` Nov 2 2:47:04 PM [GET] 400 clientIP="34.105.23.229" requestID="542c7b8b-c833-4b3c" ... ``` Render includes this same value in the `Rndr-Id` HTTP header—both in the request to your web service _and_ in the response to the requesting client: ```http Rndr-Id: 542c7b8b-c833-4b3c ``` In your web service's code, you can extract this value from the header and include it in every log you generate for a given request. If you do, you can search for this ID in the log explorer to view the corresponding request's chronological log history. 
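As a minimal sketch of that pattern, here's a plain WSGI handler (standard library only) that pulls the `Rndr-Id` header out of the request and prefixes every log line with it. The app and logger names are illustrative; framework middleware (Flask, Express, Rails, etc.) would follow the same idea:

```python
# Minimal WSGI sketch: extract the Rndr-Id header and include it in
# application logs so you can search for one request's full history.
import logging

logging.basicConfig(format="%(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("app")

def app(environ, start_response):
    # WSGI exposes the Rndr-Id HTTP header as environ["HTTP_RNDR_ID"].
    request_id = environ.get("HTTP_RNDR_ID", "unknown")
    # Prefix every log line with the ID so it matches Render's own
    # request log entry for the same request.
    log.info("requestID=%s handling %s", request_id, environ.get("PATH_INFO"))
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```

Searching the log explorer for that `requestID` value then surfaces both Render's request log entry and every application log line your handler emitted for the request.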
On the client's side, here's what a `Rndr-Id` looks like in Chrome's Network panel: [img] By tracing each phase of the request lifecycle with one consistent ID, you can more quickly diagnose and debug issues in collaboration with the users who encounter them. ## Logs for an individual deploy or job View the logs for an individual deploy of your service from the service's *Events* page. Click the word *Deploy* in a timeline entry to open the log viewer: [img]

⬇️

[img] Similarly, you can view logs for the execution of a [one-off job](one-off-jobs) from the associated service's *Jobs* page. ## Log limits ### Retention period Render's log retention period depends on your workspace's plan (see the [pricing page](pricing)): | Workspace Plan | Retention Period | | ------------------------- | ---------------- | | Hobby | 7 days | | Professional | 14 days | | Organization / Enterprise | 30 days | Logs older than your current retention period are no longer available, even if you upgrade your plan to extend the period. If you need to retain logs for a longer period, you can [stream your logs to a syslog-compatible provider](log-streams). ### Rate limit Render processes a maximum of 6,000 application-generated log lines per minute for each running instance of a service. If an instance generates logs in excess of this limit, Render drops the excess log lines. Dropped log lines don't appear in the log explorer or in [log streams](log-streams). # Streaming Render Service Logs You can stream the logs generated by your Render services to any logging provider with a TLS-enabled [syslog](https://en.wikipedia.org/wiki/Syslog) endpoint, such as Datadog or Sumo Logic. This includes [HTTP request logs](logging#http-request-logs) for *Professional* workspaces and higher. After you set a default stream destination for your workspace, all of your supported services start streaming their logs to that destination. You can [override this](#overriding-defaults) for individual services. > Render does not emit logs for [static sites](static-sites). ## Setup 1. From your workspace home in the [Render Dashboard][dboard], click *Integrations > Observability* in the left pane. 2. Scroll down to the *Log Streams* section: [img] 3. Under *Default destination*, click *+ Set default*. The following dialog appears: [img] 4. Provide your syslog endpoint URL in the *Log Endpoint* field. - Use the format `HOST:PORT` (for example, `logs.papertrailapp.com:34302`). 
- For help finding the endpoint URL with common providers, [see below](#finding-your-syslog-endpoint). 5. If Render needs to include an authentication token with all reported logs, provide it in the *Token* field. - This is required for logging providers that use a single syslog endpoint for multiple users, such as Datadog. 6. Click *Save Changes*. 7. Toggle *Include logs from preview instances* to configure whether your log stream includes logs from your [service previews](service-previews) and [preview environments](preview-environments). You're all set! Logs from Render will start to appear in your provider's feed shortly. ## Overriding defaults [*Professional* workspaces](professional-features) and higher can override log stream settings for individual services: | Custom Setting | Hobby | Professional | Organization / Enterprise | |--------|--------|--------|--------| | Omit individual services from log stream | ❌ | 🟢 | 🟢 | | Set a custom destination for individual services | ❌ | ❌ | 🟢 | 1. In the [Render Dashboard][dboard], open the Settings page for the service you want to override and scroll down to the *Log Stream* section: [img] 2. Open the *•••* menu and click *Override*. The following dialog appears: [img] 3. Select *Forward to a different destination* or *Don't forward this service's logs*. - Forwarding to a different destination requires an Organization workspace or higher. 4. Provide any necessary details for the selected option and click *Save override*. You're all set! Your service now uses its own custom log stream settings: [img] You can revert this custom configuration by clicking *Reset to default*. ## Reporting format Render streams logs to your provider's syslog endpoint over TCP. Log lines are formatted according to [RFC5424](https://tools.ietf.org/html/rfc5424), which is supported by most popular providers. 
Log streams do _not_ support: - Insecure (non-TLS-enabled) endpoints - Providers that require a custom log format > If you encounter issues integrating with a syslog-compatible provider, please let us know at *support@render.com*. A formatted log line looks like this: ``` <0>1 2021-03-31T16:00:00-08:00 test-service cron-12345 74440 cron-12345 - hello this is a test ``` Render annotates each log line with: - The corresponding service's slug - The type of service (`web`, `cron`, etc.) - A unique identifier for the instance - Use this value to track your service between deploys, or to distinguish between multiple instances if you're running more than one. If you're using a standard format like `logfmt` or `json`, Render maps the `level` field to an appropriate syslog priority. Otherwise, Render makes a best effort to parse log levels, defaulting to `INFO`. ## Finding your syslog endpoint Consult your logging provider's documentation to obtain your syslog endpoint and any necessary token. Instructions for certain providers are also available below. > If there’s a logging provider you’d like us to add to this list, please submit a [feature request](https://feedback.render.com). ### Better Stack (previously Logtail) Create a new source in [Better Stack Logs](https://logs.betterstack.com/) with the platform `Render`: [img] Then, when adding your log stream in Render: - Provide `in.logs.betterstack.com:6514` as the **Log Endpoint**. - Provide the **source token** from Better Stack as the **Token**. For more information, see the [Better Stack documentation](https://betterstack.com/docs/logs/render). ### Datadog See [this section](datadog#streaming-service-logs-and-metrics). ### highlight.io To stream logs to your existing highlight.io project: - Provide `syslog.highlight.io:34302` as the **Log Endpoint**. - Provide your highlight project ID as the **Token**. - Your highlight project ID is shown in the top left of your project page. 
For more information, see the [highlight.io documentation](https://www.highlight.io/docs/getting-started/backend-logging/hosting/render). ### Mezmo (previously LogDNA) > We've observed high rates of connection failures with Mezmo's syslog endpoint and do not recommend using them with Render at this time. Log in to your Mezmo account and navigate to the [sources page](https://app.logdna.com/pages/add-source). Select **syslog** on the left sidebar to see your syslog endpoint. [img] ### Papertrail Log in to your account and navigate to the [setup page](https://papertrailapp.com/systems/setup?type=system&platform=unix#unix-manual) to find your Syslog endpoint: [img] If you use the same Papertrail account to collect logs from multiple providers, you can optionally [generate a unique endpoint for your Render services](https://papertrailapp.com/destinations/new). ### SolarWinds Follow the instructions for [sending logs using syslog](https://documentation.solarwinds.com/en/success_center/observability/content/configure/configure-logs-syslog.htm). - Set the **Log Endpoint** to your organization's syslog collector endpoint. This endpoint has the format `syslog.collector.xx-yy.cloud.solarwinds.com:6514`, where `xx-yy` represents the data center your organization uses. See [Data centers and endpoint URIs](https://documentation.solarwinds.com/en/success_center/observability/content/system_requirements/endpoints.htm) to find the exact URL. - Provide your API ingestion token as the *Token*. - Your API ingestion token is found in the Token field. ### Sumo Logic Follow the instructions for [configuring a cloud syslog source](https://help.sumologic.com/docs/send-data/hosted-collectors/cloud-syslog-source/#configure-a-cloudsyslogsource). After you configure your source, Sumo Logic displays a modal with a *Token* and *Host*. Use these for your log stream's *Token* and *Log Endpoint*, respectively. 
# The Render CLI Use the Render CLI to manage your Render services and datastores directly from your terminal: [video] Among many other capabilities, the CLI supports: - Triggering service deploys, restarts, and one-off jobs - Opening a psql session to your database - Viewing and filtering live service logs The CLI also supports [non-interactive use](#non-interactive-mode) in scripts and CI/CD. > Please submit bugs and feature requests on the CLI's [public GitHub repository](https://github.com/render-oss/cli). ## Setup ### 1. Install **Homebrew** Run the following commands: ```shell brew update brew install render ``` **Linux/MacOS** Run the following command: ```shell curl -fsSL https://raw.githubusercontent.com/render-oss/cli/refs/heads/main/bin/install.sh | sh ``` **Direct download** 1. Open the CLI's [GitHub releases page](https://github.com/render-oss/cli/releases/). 2. Download the executable that corresponds to your system's architecture. If you use an architecture besides those provided, you can build from source instead. **Build from source** > We recommend building from source only if no other installation method works for your system. 1. [Install the Go programming language](https://golang.org/doc/install) if you haven't already. 2. Clone and build the CLI project with the following commands: ```shell git clone git@github.com:render-oss/cli.git cd cli go build -o render ``` After installation completes, open a new terminal tab and run `render` with no arguments to confirm. ### 2. Log in The Render CLI uses a **CLI token** to authenticate with the Render platform. Generate a token with the following steps: 1. Run the following command: ```shell render login ``` Your browser opens a confirmation page in the Render Dashboard. 2. Click **Generate token**. The CLI saves the generated token to its [local configuration file](#local-config). 3. When you see the success message in your browser, close the tab and return to your terminal. 4. 
The CLI prompts you to set your active workspace. You can switch workspaces at any time with `render workspace set`. You're ready to go! ## Common commands > **This is not an exhaustive list of commands.** > > - Run `render` with no arguments for a list of all available commands. > - Run `render help [COMMAND]` for details about a specific command. | Command | Description | |--------|--------| | `login` | Opens your browser to authorize the Render CLI for your account. Authorizing generates a CLI token that's saved locally. If the CLI already has a valid CLI token or [API key](#1-authenticate-via-api-key), this command instead exits with a zero status. | | `workspace set` | Sets the CLI's active workspace. CLI commands always operate on the active workspace. | | `services` | Lists all services and datastores in the active workspace. Select a service to perform actions like deploying, viewing logs, or opening an SSH/psql session. | | `deploys list`
`[SERVICE_ID]` | Lists deploys for the specified service. Select a deploy to view its logs or open its details in the Render Dashboard. If you don't provide a service ID in interactive mode, the CLI prompts you to select a service. | | `deploys create`
`[SERVICE_ID]` | Triggers a deploy for the specified service. If you don't provide a service ID in interactive mode, the CLI prompts you to select a service. In [non-interactive mode](#non-interactive-mode), helpful options include: - `--wait` to block until the deploy completes (a failed deploy exits with a non-zero status) - `--commit [SHA]` to deploy a specific commit (Git-backed services only) - `--image [URL]` to deploy a specific Docker image tag or digest (image-backed services only) | | `psql`
`[DATABASE_ID]` | Opens a psql session to the specified PostgreSQL database. If you don't provide a database ID in interactive mode, the CLI prompts you to select a database. | | `ssh`
`[SERVICE_ID]` | Opens an SSH session to a running instance of the specified service. If you don't provide a service ID in interactive mode, the CLI prompts you to select a service. | ## Non-interactive mode By default, the Render CLI uses interactive, menu-based navigation. This default is great for manual use, but not for scripting or automation. Configure the CLI for non-interactive use in CI/CD and other automated environments with the following steps: ### 1. Authenticate via API key The Render CLI can authenticate using an API key instead of [`render login`](#2-log-in). Unlike CLI tokens, API keys do not periodically expire. For security, use this authentication method only for automated environments. 1. Generate an API key with [these steps](api#1-create-an-api-key). 2. In your automation's environment, set the `RENDER_API_KEY` environment variable to your API key: ```bash export RENDER_API_KEY=rnd_RUExip… ``` > If you provide an API key this way, it always takes precedence over CLI tokens you generate with `render login`. ### 2. Set non-interactive command options Set the following options for _all_ commands you run in non-interactive mode: | Flag | Description | |--------|--------| | `-o` / `--output` | Sets the output format. For automated environments, specify `json` or `yaml`. Also supports `text` for unstructured text output, along with the default value `interactive`. | | `--confirm` | Skips any confirmation prompts that the command would otherwise display. | For example, to list the active workspace's services in JSON format: ```shell render services --output json --confirm ``` ### Example: GitHub Actions This example action provides similar functionality to Render's [automatic Git deploys](deploys#automatic-git-deploys). You could disable auto-deploys and customize this action to trigger deploys with different conditions. 
To use this action, first set the following secrets in your repository: | Secret | Description | | ------------------- | -------------------------------------------------- | | `RENDER_API_KEY` | A valid Render [API key](api#1-create-an-api-key) | | `RENDER_SERVICE_ID` | The ID of the service you want to deploy | ```yaml name: Render CLI Deploy run-name: Deploying via Render CLI # Run this workflow when code is pushed to the main branch. on: push: branches: - main jobs: Deploy-Render: runs-on: ubuntu-latest steps: # Downloads the Render CLI binary and adds it to the PATH. # To prevent breaking changes in CI/CD, we pin to a # specific CLI version (in this case 1.1.0). - name: Install Render CLI run: | curl -L https://github.com/render-oss/cli/releases/download/v1.1.0/cli_1.1.0_linux_amd64.zip -o render.zip unzip render.zip sudo mv cli_v1.1.0 /usr/local/bin/render - name: Trigger deploy with Render CLI env: # The CLI can authenticate via a Render API key without logging in. RENDER_API_KEY: ${{ secrets.RENDER_API_KEY }} CI: true run: | render deploys create ${{ secrets.RENDER_SERVICE_ID }} --output json --confirm ``` ## Local config By default, the Render CLI stores its local configuration at the following path: ``` $HOME/.render/cli.yaml ``` You can change this file path by setting the `RENDER_CLI_CONFIG_PATH` environment variable. ## Managing CLI tokens For security, CLI tokens periodically expire. If you don't use the Render CLI for a while, you might need to re-authenticate with `render login`. View a list of your active CLI tokens from your [Account Settings page](https://dashboard.render.com/u/settings#render-cli-tokens) in the Render Dashboard. You can manually revoke a CLI token that you no longer need or that might be compromised. Expired and revoked tokens do not appear in the list. 
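If a script needs to locate the same file, the lookup described above (the `RENDER_CLI_CONFIG_PATH` override, falling back to `$HOME/.render/cli.yaml`) can be sketched in Python. The helper name here is ours, not part of the CLI:

```python
import os
from pathlib import Path

def render_cli_config_path() -> Path:
    """Resolve the Render CLI's config file location.

    Honors the RENDER_CLI_CONFIG_PATH override, falling back
    to the documented default of $HOME/.render/cli.yaml.
    """
    override = os.environ.get("RENDER_CLI_CONFIG_PATH")
    if override:
        return Path(override)
    return Path.home() / ".render" / "cli.yaml"

print(render_cli_config_path())
```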
# Render MCP Server Render's *Model Context Protocol* (*MCP*) server enables you to manage your Render infrastructure directly from compatible AI apps, such as Cursor and Claude Code: [video] Using natural language prompts, you can: - Spin up new services - Query your databases - Analyze metrics and logs ...and more! For inspiration, see some [example prompts.](#example-prompts) *What is MCP?* [*Model Context Protocol*](https://modelcontextprotocol.io/introduction) (*MCP*) is an open standard for connecting AI applications to external tools and data. An *MCP server* exposes a set of actions that AI apps can invoke to help fulfill relevant user prompts (e.g., "Find all the documents I edited yesterday"). To perform an action, an MCP server often calls an external API, then packages the result into a standardized format for the calling application. ## How it works The Render MCP server is hosted at the following URL: ``` https://mcp.render.com/mcp ``` You can configure compatible AI apps (such as [Cursor](https://docs.cursor.com/context/mcp) and [Claude Code](https://docs.anthropic.com/en/docs/claude-code/mcp)) to communicate with this server. When you provide a relevant prompt, your tool intelligently calls the MCP server to execute supported platform actions: [img] **In the example diagram above:** 1. A user prompts Cursor to "List my Render services". 2. Cursor intelligently detects that the Render MCP server supports actions relevant to the prompt. 3. Cursor directs the MCP server to execute the `list_services` "[tool](https://modelcontextprotocol.io/docs/concepts/tools)", which calls the Render API to fetch the corresponding data. > To explore the implementation of the MCP server itself, see the [open-source project](https://github.com/render-oss/render-mcp-server). ## Setup ### 1. Create an API key The MCP server uses an [API key](api#1-create-an-api-key) to authenticate with the Render platform. 
Create an API key from your [Account Settings page](https://dashboard.render.com/settings#api-keys): [img] > **Render API keys are broadly scoped.** They grant access to all workspaces and services your account can access. > > Before proceeding, make sure you're comfortable granting these permissions to your AI app. The Render MCP server currently supports only one potentially destructive operation: modifying an existing service's environment variables. ### 2. Configure your tool Next, we'll configure your AI app to use Render's hosted MCP server. Most compatible apps define their MCP configuration in a JSON file (such as `~/.cursor/mcp.json` for Cursor). Select the tab for your app: **Cursor** #### Cursor setup Add the following configuration to `~/.cursor/mcp.json`: ```json{3-8} { "mcpServers": { "render": { "url": "https://mcp.render.com/mcp", "headers": { "Authorization": "Bearer YOUR_API_KEY" } } } } ``` Replace `YOUR_API_KEY` with your [API key](#1-create-an-api-key). For more details, see the [Cursor MCP documentation](https://docs.cursor.com/en/context/mcp#using-mcp-json). **Claude Code** #### Claude Code setup Run the following command, substituting your [API key](#1-create-an-api-key) where indicated: ```bash claude mcp add --transport http render https://mcp.render.com/mcp --header "Authorization: Bearer YOUR_API_KEY" ``` You can include the `--scope` flag to specify where this MCP configuration is stored. For more details, see the [Claude Code MCP documentation](https://docs.anthropic.com/en/docs/claude-code/mcp#option-3%3A-add-a-remote-http-server). **Claude Desktop** #### Claude Desktop setup Add the configuration below to your Claude Desktop MCP settings. 
By default, this file is located at the following paths based on your operating system: - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json` - Windows: `%APPDATA%\Claude\claude_desktop_config.json` ```json{3-14} { "mcpServers": { "render": { "command": "npx", "args": [ "mcp-remote", "https://mcp.render.com/mcp", "--header", "Authorization: Bearer ${RENDER_API_KEY}" ], "env": { "RENDER_API_KEY": "YOUR_API_KEY" } } } } ``` Replace `YOUR_API_KEY` with your [API key](#1-create-an-api-key). For more details, see the [Claude Desktop MCP documentation](https://modelcontextprotocol.io/quickstart/user). **Windsurf** #### Windsurf setup Add the following configuration to `~/.codeium/windsurf/mcp_config.json`: ```json{3-8} { "mcpServers": { "render": { "url": "https://mcp.render.com/mcp", "headers": { "Authorization": "Bearer YOUR_API_KEY" } } } } ``` Replace `YOUR_API_KEY` with your [API key](#1-create-an-api-key). For more details, see the [Windsurf MCP documentation](https://docs.windsurf.com/windsurf/cascade/mcp#mcp-config-json). **Other tools** #### Setup for other apps See the documentation for other popular AI apps: - [VS Code](https://docs.github.com/en/copilot/customizing-copilot/extending-copilot-chat-with-mcp) - [Zed](https://zed.dev/docs/ai/mcp) - [Gemini CLI](https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md) - [Crush](https://github.com/charmbracelet/crush#mcps) - [Warp](https://docs.warp.dev/knowledge-and-collaboration/mcp#adding-an-mcp-server) ### 3. Set your workspace To start using the Render MCP server, you first tell your AI app which Render workspace to operate in. This determines which resources the MCP server can access. You can set your workspace with a prompt like `Set my Render workspace to [WORKSPACE_NAME]`. [img] If you _don't_ set your workspace, your app usually directs you to specify one if you submit a prompt that uses the MCP server (such as `List my Render services`): [img] With your workspace set, you're ready to start prompting! 
Get started with some [example prompts](#example-prompts). ## Example prompts Your AI app can use the Render MCP server to perform a wide variety of platform actions. Here are some basic example prompts to get you started: #### Service creation
Create a new database named user-db with 5 GB storage
Deploy an example Flask web service on Render using https://github.com/render-examples/flask-hello-world
#### Data analysis
Using my Render database, tell me which items were the most frequently bought together
Query my read replica for daily signup counts for the last 30 days
#### Service metrics
What was the busiest traffic day for my service this month?
What did my service's autoscaling behavior look like yesterday?
#### Troubleshooting
Pull the most recent error-level logs for my API service
Why isn't my site at example.onrender.com working?
## Supported actions The Render MCP server provides a "[tool](https://modelcontextprotocol.io/docs/concepts/tools)" for each platform action listed below (organized by resource type). Your AI app (the "MCP host") can combine these tools however it needs to perform the tasks you describe. > For more details on all available tools, see the [project README](https://github.com/render-oss/render-mcp-server). | Resource Type | Supported Actions | |--------|--------| | **Workspaces** | - List all workspaces you have access to - Set the current workspace - Fetch details of the currently selected workspace | | **Services** | - Create a new web service or static site - Other service types are not yet supported. - List all services in the current workspace - Retrieve details about a specific service - Update all environment variables for a service | | **Deploys** | - List the deploy history for a service - Get details about a specific deploy | | **Logs** | - List logs matching provided filters - List all values for a given log label | | **Metrics** | - Fetch performance metrics for services and datastores, including: - CPU / memory usage - Instance count - Datastore connection counts - Web service response counts, segmentable by status code - Web service response times (requires a **Professional** workspace or higher) - Outbound bandwidth usage | | **Render Postgres** | - Create a new database - List all databases in the current workspace - Get details about a specific database - Run a read-only SQL query against a specific database | | **Render Key Value** | - List all Key Value instances in your Render account - Get details about a specific Key Value instance - Create a new Key Value instance | ## Running locally > **We strongly recommend using Render's [hosted MCP server](#2-configure-your-tool) instead of running it locally.** > > The hosted MCP server automatically updates with new capabilities as they're added. Run locally only if required for your use case. 
You can install and run the Render MCP server on your local machine as a Docker container, or by running the executable directly: **Docker image** #### Docker setup > **This method requires `docker`.** With this configuration, your AI app pulls and runs the Render MCP server as a Docker container. Add JSON with the format below to your tool's MCP configuration (substitute `YOUR_API_KEY` with your [API key](api#1-create-an-api-key)): ```json{3-18} { "mcpServers": { "render": { "command": "docker", "args": [ "run", "-i", "--rm", "-e", "RENDER_API_KEY", "-v", "render-mcp-server-config:/config", "ghcr.io/render-oss/render-mcp-server" ], "env": { "RENDER_API_KEY": "YOUR_API_KEY" } } } } ``` The `mcpServers` key above might differ for specific tools. For example, Zed uses `context_servers` and GitHub Copilot uses `servers`. Consult your tool's documentation for details. **Executable** #### Local executable setup With this configuration, your AI app runs the Render MCP server executable directly. 1. Install the MCP server executable using one of the methods described in [Local installation](#local-installation), then return here. 2. Add JSON with the format below to your tool's MCP configuration (substitute your [API key](api#1-create-an-api-key) and the path to your MCP server executable): ```json{3-8} { "mcpServers": { "render": { "command": "/path/to/render-mcp-server-executable", "env": { "RENDER_API_KEY": "YOUR_API_KEY" } } } } ``` The `mcpServers` key above might differ for specific tools. For example, Zed uses `context_servers` and GitHub Copilot uses `servers`. Consult your tool's documentation for details. ### Local installation > **Follow these instructions only if you're running the MCP server [locally](#running-locally) and without Docker.** > > We strongly recommend instead using Render's hosted MCP server, because it automatically updates as new capabilities are added. **View installation methods** **Install script** > **This method requires macOS or Linux.** 1. 
Run the following `curl` command: ```shell curl -fsSL https://raw.githubusercontent.com/render-oss/render-mcp-server/refs/heads/main/bin/install.sh | sh ``` 2. Note the full path where the install script saved the downloaded executable. The output includes a message like the following: ``` ✨ Successfully installed Render MCP Server to /Users/example/.local/bin/render-mcp-server ``` **Direct download** 1. Open the MCP server's [GitHub releases page.](https://github.com/render-oss/render-mcp-server/releases) 2. Under the most recent release, download and unzip the executable that corresponds to your system's architecture. - If a release asset isn't available for your architecture, select a different installation method. 3. Move the executable to the desired directory and note its full path. > **Note for macOS users:** > > You might need to grant a system exception to run the downloaded executable, because it's from an "unknown developer." [Learn more.](https://support.apple.com/guide/mac-help/open-a-mac-app-from-an-unknown-developer-mh40616/mac) **Build from source** > **We recommend building from source only in the following cases:** > > - No other installation method works for your system. > - You're making custom changes to the MCP server. 1. Install the [Go programming language](https://go.dev/doc/install) if you haven't already. 2. Clone the MCP server repository and build the executable: ```shell git clone https://github.com/render-oss/render-mcp-server.git cd render-mcp-server go build ``` This creates a `render-mcp-server` executable in the repo's root directory. 3. Note the full path to the newly built executable. ## Limitations The Render MCP server attempts to minimize exposing sensitive information (like connection strings) to your AI app's context. However, Render does not _guarantee_ that sensitive information will not be exposed. Exercise caution when interacting with secrets in your AI app. 
Note the following additional limitations: - The MCP server supports creation of the following resources: - [Web services](web-services) - [Static sites](static-sites) - [Render Postgres databases](postgresql) - [Render Key Value instances](key-value) Other service types (private services, background workers, and cron jobs) are not yet supported. - The MCP server does not support creating [free instances](free). - The MCP server does not support all configuration options when creating services. - For example, you cannot create image-backed services or set up IP allowlists. If there are options you'd like to see supported, please submit an issue on the MCP server's [GitHub repository](https://github.com/render-oss/render-mcp-server/issues). - The MCP server does not support modifying or deleting existing Render resources, with one exception: - You _can_ modify an existing service's environment variables. - To perform other modifications or deletions, use the [Render Dashboard][dboard] or [REST API](api). - The MCP server does not support triggering deploys, modifying scaling settings, or other operational service controls. # The Render API Render provides a public REST API for managing your services and other resources programmatically. The API supports almost all of the same functionality available in the [Render Dashboard][dboard]. It includes endpoints for managing: - Services and datastores - Deploys - Environment groups - Blueprints - Metrics and logs - Projects and environments - Custom domains - One-off jobs - Audit logs - Additional account settings > To request new API functionality, please [submit a feature request](https://feedback.render.com/features). ## Setup ### 1. Create an API key All Render API requests require authentication via API key. 
You create and manage API keys from your [Account Settings page](https://dashboard.render.com/u/settings?add-api-key) in the Render Dashboard: [img] An API key is displayed in full only when it's created: [img] > *API keys are secret credentials!* > > Don't publicly post your API key, commit it to version control, or otherwise share it with anyone outside your organization. If you believe an API key has been compromised, revoke it in the Render Dashboard and create a new one. ### 2. Make your first request To test your API key, let's make a quick `curl` request to list your services. Run the following in your terminal after replacing `{{render_api_token_goes_here}}` with your API key: ```bash curl --request GET \ --url 'https://api.render.com/v1/services?limit=20' \ --header 'Accept: application/json' \ --header 'Authorization: Bearer {{render_api_token_goes_here}}' ``` If your API key is valid, this request to the [List services](https://api-docs.render.com/reference/list-services) endpoint returns a `200` response with your service details in a JSON array. ## API reference [**Open the API reference**](https://api-docs.render.com) for a comprehensive list of supported endpoints. The reference is interactive, and it provides example usage in multiple programming languages. [img] ## OpenAPI spec The Render API is described by an OpenAPI 3.0 spec. The spec is available in JSON format at the following URL: ``` https://api-docs.render.com/openapi/6140fb3daeae351056086186 ``` You can use this spec to generate custom clients and with other tooling. > *Render's OpenAPI spec is subject to change.* > > The API itself will maintain backward compatibility, but details of the describing spec (such as names for endpoints and tags) might change over time. This might affect custom clients or other tools that rely on the spec. # Integrating Render with Datadog [Datadog](https://www.datadoghq.com/) is an observability platform for cloud-scale applications. 
You can integrate your Render services and databases with Datadog to enable fine-tuned metrics, monitoring, and automated alerting. Although [core Postgres metrics](postgresql-creating-connecting#dashboard) are available in the Render Dashboard, integrating with Datadog can provide more detailed metrics about the Postgres instance host environment. You can also use Datadog as a centralized location for dashboards and automated alerts. ## Getting started [Sign up for a Datadog account](https://app.datadoghq.com/signup) if you don't already have one. Then, create or retrieve an API key from your [Datadog organization settings page](https://app.datadoghq.com/organization-settings/api-keys): [img] > *Make sure to create an _API key_ for your organization.* The Datadog integration doesn't support using an _application key_ or a user-scoped API key. You can confirm that you've correctly generated an API key by calling Datadog's [validate endpoint](https://docs.datadoghq.com/api/latest/authentication/) with your key. ## Setting up Postgres monitoring Adding an API key enables a Datadog agent to run alongside your [Render Postgres](postgresql) instance and report metrics to your Datadog account. All metrics reported by the agent are native to the Datadog platform, so you aren't [billed for custom metrics](https://docs.datadoghq.com/account_management/billing/custom_metrics). > Render currently supports only the [Datadog sites](https://docs.datadoghq.com/getting_started/site/) US1, US3, US5, and EU1. Postgres monitoring is not supported with other Datadog sites (such as AP1, AP2, and US1-FED). ### For new databases While creating a Postgres database, provide your Datadog API key in the corresponding field: [img] ### For existing databases Add your Datadog API key from the *Info* tab of your database's page on the [Render Dashboard][dboard] (in the General section, click *Add Datadog API Key*). > This requires a restart of your Postgres instance, which causes brief downtime. 
[img] ### Available metrics Render fully supports all of the following Datadog integrations (see the linked documentation for metrics details): | Integration | Description | | ------------------------------------------------------------ | ------------------------------------------------------------- | | [Postgres](https://docs.datadoghq.com/integrations/postgres) | Metrics related to your PostgreSQL instance | | [Disk](https://docs.datadoghq.com/integrations/disk) | Metrics related to disk usage and IO for your Postgres volume | | [Network](https://docs.datadoghq.com/integrations/network) | Metrics related to TCP/IP network stats of instance | In addition, Render reports the following metrics: | Metric | Description | | ---------------------- | -------------------------------------------------------------------------- | | `system.cpu.num_cores` | The number of CPUs, as chosen by your database instance type | | `system.cpu.system` | The percentage of time the CPU spent running the kernel | | `system.cpu.user` | The percentage of time the CPU spent running user space processes | | `system.mem.free` | The amount of free RAM | | `system.mem.total` | The total amount of physical RAM, as chosen by your database instance type | | `system.mem.used` | The amount of RAM in use | ### Viewing metrics in Datadog You can view reported metrics from any Datadog dashboard or metrics explorer page. You can filter metrics by the `database-id` tag equal to your Render Postgres database ID. [img] ## Streaming service logs and metrics You can use Datadog as your observability provider for Render [log streams](log-streams) and [metrics streams](metrics-streams). This enables you to inspect logs and metrics from your Render services directly in your Datadog dashboard. > *Currently, Render only supports TCP log forwarding with TLS.* > > Check the [Datadog docs](https://docs.datadoghq.com/logs/log_collection/?tab=host) to confirm whether TCP log forwarding is supported for your site. 
To set these up, follow the general setup instructions in the [log stream docs](log-streams#setup) and [metrics stream docs](metrics-streams#general-setup). In both cases, you specify your Datadog API key and [Datadog site](https://docs.datadoghq.com/getting_started/site/) in the Render Dashboard. Note that both integrations require an organization-level API key, not an application key or user-scoped API key. For log streams, you specify the endpoint corresponding to your Datadog site: | Datadog Site | Endpoint | | ------------ | ---------------------------------- | | US1 | `intake.logs.datadoghq.com:10516` | | EU | `tcp-intake.logs.datadoghq.eu:443` | # QuotaGuard Static IP [QuotaGuard Static IPs](https://www.quotaguard.com/quotaguard-static-ip-pricing/#QGstatic) allow your services on Render to send outbound traffic through a load-balanced pair of static IP addresses. Once set up, you can use QuotaGuard's IPs to connect to IP-restricted environments outside your Render network. > You do not need QuotaGuard to connect to your databases or private services on Render. ## Getting Started After [creating a QuotaGuard account](https://www.quotaguard.com/quotaguard-static-ip-pricing/#QGstatic), you will be redirected to a setup page with all the information needed to proxy your traffic through QuotaGuard's static IPs. Make note of the HTTP/S URLs and your static outbound IPs. [img] ## Configuring Your Application You can configure QuotaGuard the same way you would configure your app to use any HTTP proxy. 
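For example, a Python app can route traffic through the proxy using only the standard library. This is a sketch: the `QUOTAGUARDSTATIC_URL` variable is the one QuotaGuard's own examples use, and the fallback URL below is a placeholder, not a real proxy.

```python
import os
import urllib.request

# QUOTAGUARDSTATIC_URL holds the proxy URL from your QuotaGuard setup page.
# The fallback value here is a placeholder for illustration.
proxy_url = os.environ.get(
    "QUOTAGUARDSTATIC_URL", "http://user:pass@proxy.example.com:9293"
)

# Route both HTTP and HTTPS traffic through the static-IP proxy.
proxies = {"http": proxy_url, "https": proxy_url}
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))

# Requests made with this opener exit through QuotaGuard's static IPs, e.g.:
# print(opener.open("http://ip.quotaguard.com").read())
```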
[QuotaGuard provides examples](https://support.quotaguard.com/support/home) for common languages: - [QuotaGuard for Go](https://www.quotaguard.com/docs/language-platform/go/) - [QuotaGuard for Node.js](https://www.quotaguard.com/docs/language-platform/node-js/) - [QuotaGuard for PHP](https://www.quotaguard.com/docs/language-platform/php/) - [QuotaGuard for Python](https://www.quotaguard.com/docs/language-platform/python/) - [QuotaGuard for Ruby](https://www.quotaguard.com/docs/language-platform/ruby/) Most of these examples involve adding the `QUOTAGUARDSTATIC_URL` environment variable to your service in the Render Dashboard. Set the value to the *_HTTP/HTTPS URL_* from your QuotaGuard Outbound Proxy Setup page. Any requests your application makes using QuotaGuard's proxy URL will be routed through one of the IPs displayed in your QuotaGuard configuration. > Using QuotaGuard Static IPs will add an additional hop to your network requests and can affect response times for your application. Make sure to test application response times before and after enabling QuotaGuard. ## Testing Your Implementation Requests to `ip.quotaguard.com` always return the request client's IP address. Requests made to this address with your QuotaGuard proxy configuration will return one of the static IPs listed in your configuration. ```shell{outputLines:2} curl -x QUOTAGUARDSTATIC_URL ip.quotaguard.com => {"ip":"52.34.188.175"} ``` # Formspree With [Formspree](https://formspree.io/) you can accept form submissions on your static site without needing a backend. It provides a number of [integration options](https://help.formspree.io/hc/en-us/articles/360053239754-Getting-started-with-projects), all of which you can use with Render. ## Using the Formspree Dashboard To get started, create a Formspree account, [log into your dashboard](https://formspree.io/forms), and create a new *Dashboard Project*. 
After you create a new form in your project, you'll get an endpoint URL to use as the `action` attribute in your form. Submissions will be directed to your Formspree account. This integration option does not require configuration changes to your Render service; simply set the `action` URL in your form. [img] ## Using the Formspree CLI You can use Formspree's CLI together with the Formspree React library to programmatically create and configure forms on every deploy. Follow [Formspree's documentation](https://help.formspree.io/hc/en-us/articles/360053819114) to install their React library and CLI. To set up continuous deployment on Render, add the `FORMSPREE_DEPLOY_KEY` environment variable to your Render site and set the value to the deploy key in your [Formspree project settings](https://formspree.io/forms). Install Formspree's CLI as a dependency using npm or yarn: ```shell{outputLines:1} # with npm npm install --save @formspree/cli ``` ```shell{outputLines:1} # with yarn yarn add @formspree/cli ``` Add Formspree's deploy script to your `package.json` as shown below: ```json{12} { "name": "my-cool-site", "version": "0.1.0", "dependencies": { "@formspree/cli": "^0.9.6", "@formspree/react": "^2.2.3", "react": "^16.7.0" }, "scripts": { "start": "react-scripts start", "build": "react-scripts build", "formspree-deploy": "formspree deploy" } } ``` You can then append Formspree's deploy script to your build script or existing build command as follows: `npm install; npm run formspree-deploy`. [img] # Workspaces, Members, and Roles Everything you create on Render (services, datastores, and so on) belongs to a *workspace*. Every workspace has an associated [plan](pricing) that determines available features and how many [team members](#manage-team-members) you can invite. When you sign up for Render, we automatically create your first workspace on the free *Hobby* plan.
You can [create additional workspaces](#create-a-workspace) or [change a workspace's plan](#change-a-workspaces-plan) at any time. > With an Enterprise plan, you can manage multiple workspaces and team members in a single [organization](enterprise-orgs). ## Create a workspace > *You can have up to five Hobby workspaces.* > > You can create an unlimited number of paid workspaces. 1. In the [Render Dashboard][dboard], open the workspace dropdown at the top of the left pane, then click *+ New Workspace*: [img] 2. Complete the workspace creation flow, including specifying a plan type and payment method. Then click *Create Workspace*. You're all set! Render creates your workspace and assigns you the [*Admin* role](#member-roles). You can switch between your different workspaces from the same dropdown. Now you can start creating services and other resources in your workspace. With a *Professional* workspace or higher, you can also [invite team members](#manage-team-members). ## Change a workspace's plan You can change your workspace's plan in the [Render Dashboard][dboard]: 1. Open the workspace dropdown in the top-left corner and select *Billing*. 2. Under the *Plan* section, click *Update Plan*. 3. Click the *Choose* button for your desired plan. - To switch to an *Enterprise* plan, please [contact sales](contact) instead. 4. Review the details of your plan change, including any reduction in available features if you're downgrading. 5. Click *Confirm*. Your plan change takes effect immediately. You can view your workspace's current plan from the *Billing* page at any time. ## Manage team members > *You can't add team members to a *Hobby* workspace.* > > First, [upgrade your workspace](#change-a-workspaces-plan) to *Professional* or higher. Team members with the [*Admin* role](#member-roles) can manage other team members (including other admins): 1. From your workspace's home in the [Render Dashboard][dboard], click *Settings* in the left pane. 2.
Scroll down to the *Team members* section. - *To add a team member,* click *+ Invite members*. Provide the member's email address and select a [role](#member-roles) from the dropdown. - *To remove a team member,* open the *•••* menu next to that member and click *Remove team member*. - *To change a team member's [role](#member-roles),* click their _current_ role and select a new one. ## Member roles Each member of a workspace has one of the following roles: | Role | Description | |--------|--------| | *Admin* | - Has full access to the workspace's resources _and_ organizational settings (such as [member management](#manage-team-members), [billing management](https://dashboard.render.com/billing), and [secure login enforcement](login-settings#enforcing-secure-login)). - Can also designate individual project environments as [protected](projects#protected-environments). - A workspace's creator is automatically assigned this role. | | *Developer* | - Has access to the workspace's resources (services, environment groups, and so on), but _not_ organizational settings. - Access is limited for resources in a [protected environment.](projects#protected-environments) | | *Contributor* | *Requires an Organization plan or higher.* Similar to the *Developer* role, with the following additional restrictions: - Can't view sensitive fields (billing info, connection strings, environment variables, and so on). - Can't access running services via SSH or the Shell tab in the [Render Dashboard][dboard]. - Can't create, modify, or delete most resources. | | *Viewer* | *[Enterprise orgs](enterprise-orgs) only.* - Has _read-only_ access to most of the workspace's resources (services, environment groups, and so on). - Can't view service logs or sensitive fields (billing info, connection strings, environment variables, and so on). | | *Billing* | *[Enterprise orgs](enterprise-orgs) only.* - Has full access to the workspace's billing settings, including updating payment method.
Can also view non-sensitive details of the workspace's resources (such as members and service names). - Learn more about the [Billing role](enterprise-orgs#the-billing-role). | Admins can reassign member roles from the *Workspace Settings* page in the [Render Dashboard][dboard]. > In an [Enterprise organization](enterprise-orgs) with multiple workspaces, org members can have a different role in each workspace. ## Role permissions > 🟢 Permitted > > ❌ Not permitted > > 🟨 Permitted with restrictions (details vary by permission) ### Workspace administration | Permission | Admin | Developer | Contributor | Viewer | Billing | |--------|--------|--------|--------|--------|--------| | *View workspace members* | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | | *Edit workspace settings* | 🟢 | ❌ | ❌ | ❌ | 🟨 Billing settings only. | | *Add/remove workspace members* | 🟢 | ❌ | ❌ | ❌ | ❌ | | *Export [audit logs](audit-logs)* | 🟢 | ❌ | ❌ | ❌ | ❌ | | *View billing details* | 🟢 | 🟢 | ❌ | ❌ | 🟢 | | *Edit payment method* | 🟢 | ❌ | ❌ | ❌ | 🟢 | | *Leave workspace* | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | | *Delete workspace* | 🟢 | ❌ | ❌ | ❌ | ❌ | ### Projects and environments | Permission | Admin | Developer | Contributor | Viewer | Billing | |--------|--------|--------|--------|--------|--------| | *View projects* | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | | *Create/modify projects* | 🟢 | 🟢 | ❌ | ❌ | ❌ | | *Delete projects* | 🟢 | 🟨 Can't delete a project with at least one [protected environment.](projects#protected-environments) | ❌ | ❌ | ❌ | | *View environments* | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | | *Create environments* | 🟢 | 🟢 | ❌ | ❌ | ❌ | | *Modify/delete environments* | 🟢 | 🟨 Non-[protected environments](projects#protected-environments) only. | ❌ | ❌ | ❌ | | *Designate an environment as [protected](projects#protected-environments) or [network-isolated](projects#blocking-cross-environment-traffic)* | 🟢 | ❌ | ❌ | ❌ | ❌ | | *Move resources into or out of an environment* | 🟢 | 🟨 Non-[protected environments](projects#protected-environments) only. 
| ❌ | ❌ | ❌ | | *Manage environment secrets* | 🟢 | 🟨 Non-[protected environments](projects#protected-environments) only. | ❌ | ❌ | ❌ | ### Services and datastores | Permission | Admin | Developer | Contributor | Viewer | Billing | |--------|--------|--------|--------|--------|--------| | *View services* | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | | *Create services* | 🟢 | 🟨 Non-[protected environments](projects#protected-environments) only. | ❌ | ❌ | ❌ | | *Modify service configuration* | 🟢 | 🟨 Can't perform potentially destructive modifications to services in a [protected environment.](projects#protected-environments) | ❌ | ❌ | ❌ | | *Trigger service deploys* | 🟢 | 🟢 | 🟢 | ❌ | ❌ | | *Trigger [rollbacks](rollbacks)* | 🟢 | 🟢 | 🟢 | ❌ | ❌ | | *Delete services* | 🟢 | 🟨 Non-[protected environments](projects#protected-environments) only. | ❌ | ❌ | ❌ | | *View connection strings and credentials for datastores* | 🟢 | 🟨 Non-[protected environments](projects#protected-environments) only. | ❌ | ❌ | ❌ | | *Modify access control IPs for datastores* | 🟢 | 🟨 Non-[protected environments](projects#protected-environments) only. | ❌ | ❌ | ❌ | | *Access running services via SSH or the Shell tab* | 🟢 | 🟨 Non-[protected environments](projects#protected-environments) only. 
| ❌ | ❌ | ❌ | ### Observability | Permission | Admin | Developer | Contributor | Viewer | Billing | |--------|--------|--------|--------|--------|--------| | *View service events* | 🟢 | 🟢 | 🟢 | 🟢 | ❌ | | *View [service metrics](service-metrics)* | 🟢 | 🟢 | 🟢 | 🟢 | ❌ | | *View [service logs](logging)* | 🟢 | 🟢 | 🟢 | ❌ | ❌ | ### Integrations | Permission | Admin | Developer | Contributor | Viewer | Billing | |--------|--------|--------|--------|--------|--------| | *Configure [notification settings](notifications)* | 🟢 | ❌ | ❌ | ❌ | ❌ | | *Create and configure [webhooks](webhooks)* | 🟢 | ❌ | ❌ | ❌ | ❌ | # Login Settings You can log in to Render with any of the following account providers: - Google - GitHub - GitLab - Bitbucket You can also log in via email and password. ## Managing login methods Go to the *Account Security* section of your [Account Settings](https://dashboard.render.com/u/settings#account-security) page: [img] Here, you can: - Update your password - Add or remove connected login methods - Add or remove connected Git deployment credentials - Render uses these credentials to access your repositories for [deploys](deploys). - Toggle two-factor authentication (2FA) > *Rules for account connections:* > > - If you connect GitHub for both login and deployment, you _must_ use the same GitHub account for both. > - The same is true for GitLab and Bitbucket. > - You _can_ use a Git provider for deployment _without_ using it for login (or vice versa). > - Multiple Render accounts _can't_ use the same provider account to _log in_. > - Multiple Render accounts _can_ use the same provider account for _deployment_. > - You _can't_ disconnect your Google account if you belong to a workspace that enforces [Google-account-based login](#google-account-login). First, leave any such workspaces. ## Enforcing secure login > *[*SAML SSO*](saml-sso) is currently in early access for Enterprise plans.* > > To request SAML SSO for your organization, please [contact us](contact). 
Your workspace can require its members to use any combination of the following login practices: - [Two-factor authentication (2FA)](#two-factor-authentication) - [Google account login](#google-account-login) Only workspace [*admins*](team-members#member-roles) can configure login enforcement features. ### Two-factor authentication Enforce two-factor authentication (2FA) from your *Workspace Settings* page: [img] If you enforce 2FA, your team members can't access the workspace's resources or settings until they enable 2FA for their Render account. > Team members with SSH keys or [API keys](api#1-create-an-api-key) can't use these keys to access workspace resources until they enable 2FA. ### Google account login Enforce Google-account-based login from your *Workspace Settings* page: [img] If you enable this feature, your team members can't access the workspace's resources or settings if they log in using any method _besides_ their Google account (such as with a username and password). Additionally, your team members can't change their Render account's associated email address. > *As of 2024-05-01*, new [API keys](api#1-create-an-api-key) must be created while signed in via Google account to access resources of a workspace that enables this feature. > > API keys created _before_ this date always have full access to workspace resources, regardless of the team member's login method at the time of creation. # Audit Logs With a Render Organization or Enterprise plan, admins can export audit logs of material events performed by team members over a specified time frame. Audit logs help you meet the requirements of various regulatory standards. ## Exporting audit logs You can export an audit log of [workspace events](#workspace-events) for an individual workspace. With an Enterprise plan, you can also export a _separate_ audit log of [enterprise events](#enterprise-events) for your org.
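Dashboard exports arrive as a CSV whose columns are documented under [Audit log format](#audit-log-format), so they're easy to post-process with standard tooling. As a hedged illustration using only Python's standard library (the helper name and file path are hypothetical):

```python
import csv
import json
from collections import Counter

def summarize_audit_log(path):
    """Tally an exported audit log by event type and collect failed actions.

    Assumes the documented CSV columns: timestamp, actor, event, status,
    and metadata (a JSON object whose fields vary by event type).
    """
    counts = Counter()
    failures = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["event"]] += 1
            if row["status"] == "error":
                failures.append((row["timestamp"], row["actor"],
                                 row["event"], json.loads(row["metadata"])))
    return counts, failures
```

A summary like this makes it straightforward to flag, say, unexpected `ViewEnvVarValuesEvent` entries or a spike in failed logins for review.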
### API The [Render API](api) provides the following endpoints for audit logs: | Action | Endpoint | |--------|--------| | *Export workspace events* | [List workspace audit logs](https://api-docs.render.com/reference/list-owner-audit-logs) | | *Export organization events* | [List organization audit logs](https://api-docs.render.com/reference/list-organization-audit-logs) | ### Dashboard Select a tab to view instructions for each type of audit log: **Workspace events** > Only workspace admins can export these audit logs. 1. In the [Render Dashboard][dboard], navigate to your workspace's *Workspace Settings* page. 2. Scroll down to the *Compliance* section. 3. Under *Audit Logs*, select a start and end date, then click *Export as CSV*: [img] This audit log includes the event types listed under [Workspace events](#workspace-events). **Enterprise events** > Only org members with the [Enterprise Owner role](enterprise-orgs#the-enterprise-owner-role) can export these audit logs. 1. In the [Render Dashboard][dboard], navigate to your Enterprise org's *Settings* page. 2. Scroll down to the *Audit Logs* section. 3. Select a start and end date, then click *Export as CSV*. This audit log includes the event types listed under [Enterprise events](#enterprise-events). ## Availability of audit log data - Render begins retaining audit log data for your team as soon as you upgrade to an Organization or Enterprise plan. - Event data from prior to upgrading is not available. - Audit log data is available from *June 24, 2024* onward if you upgraded to an Organization or Enterprise plan before that date. - Whenever Render adds a new audit log event type, tracking for that event begins on the date of the event's introduction. ## Audit log format Audit logs are exported as a chronologically ordered CSV file with a separate row for each distinct event. The file includes the following columns: | Column | Description | |--------|--------| | *timestamp* | The UTC timestamp when the event occurred.
| | *actor* | The entity that performed the action. Depending on the type of actor, this value has one of two formats: - The email address of the *existing Render user* that performed the action (e.g., `person@example.com`) - This is the most common actor type and format. - The ID of the *deleted Render user* that performed the action (e.g., `Deleted User ID#123123123`) | | *event* | The type of event that occurred. See below for all supported [workspace events](#workspace-events) and [enterprise events](#enterprise-events). | | *status* | Indicates whether the event's associated action succeeded. One of the following values: - `success` - `error` | | *metadata* | A JSON object containing additional details about the event. The fields of this object vary depending on the event type. | ## Workspace events The event types below appear in audit logs for individual Render workspaces. ### Member management ###### `InviteToTeamEvent` A user was invited to the workspace. ###### `RemoveUserFromTeamEvent` A user was removed from the workspace. ###### `AcceptTeamInviteEvent` An invitation to join the workspace was accepted. ###### `ChangeTeamMemberRoleEvent` A workspace member's [role](team-members#member-roles) was changed. ###### `ChangeTeamAllowedLoginMethodsEvent` The workspace's set of allowed login methods was changed. Currently, a workspace can either allow all Render-supported login methods or [require login via Google account](login-settings#google-account-login). ###### `ChangeTeam2FAEnforcementEvent` [Two-factor authentication (2FA) enforcement](login-settings#two-factor-authentication) was enabled or disabled for the workspace. ###### `LoginEvent` A workspace member logged in. ###### `LogoutEvent` A workspace member logged out. ### Apps & services > Render does not log events for services belonging to a [service preview](service-previews) or [preview environment](preview-environments). 
###### `CreateServerEvent` A web service, private service, background worker, or static site was created. ###### `SuspendServiceEvent` A web service, private service, background worker, or static site was suspended. ###### `ResumeServiceEvent` A previously suspended web service, private service, background worker, or static site was resumed. ###### `MaintenanceModeEnabledEvent` [Maintenance mode](maintenance-mode) was toggled for a web service. In the event's metadata, the `to` field is `true` if maintenance mode was enabled and `false` if it was disabled. ###### `MaintenanceModeURIUpdatedEvent` The URL of a web service's [maintenance mode page](maintenance-mode#response-format) was changed. ###### `UpdateServiceNameEvent` The name of a web service, private service, background worker, static site, or cron job was changed. ###### `DeleteServerEvent` A web service, private service, background worker, or static site was deleted. ###### `StartShellEvent` A service was accessed via SSH, either from the command line or from the service's Shell page in the Render Dashboard. ###### `ApplyBlueprintEvent` A new [Blueprint](infrastructure-as-code) was created and applied to the workspace. ###### `CreateCronJobEvent` A cron job was created. ###### `DeleteCronJobEvent` A cron job was deleted. ### Datastores #### General ###### `UpdateIPAllowListEvent` The IP allow list was updated for a Render Postgres or Key Value instance. #### Render Postgres > These events are logged only for _primary_ Render Postgres instances, not for [high availability standby instances](postgresql-high-availability) or [read replicas](postgresql-read-replicas). ###### `CreatePostgresEvent` A [Render Postgres](postgresql) database was created. ###### `DeletePostgresEvent` A [Render Postgres](postgresql) database was deleted. ###### `SuspendPostgresEvent` A [Render Postgres](postgresql) database was suspended. 
###### `ResumePostgresEvent` A previously suspended [Render Postgres](postgresql) database was resumed. ###### `DownloadDatabaseBackupEvent` A [logical backup](postgresql-backups#logical-backups) of a [Render Postgres](postgresql) database was downloaded. ###### `ViewConnectionInfoEvent` The connection URL or password for a [Render Postgres](postgresql) database was viewed. #### Render Key Value ###### `CreateRedisEvent` A [Render Key Value](key-value) instance was created. ###### `DeleteRedisEvent` A [Render Key Value](key-value) instance was deleted. ###### `ViewConnectionInfoEvent` The connection URL or password for a [Render Key Value](key-value) instance was viewed. #### Persistent disks ###### `CreateServerDiskEvent` The [persistent disk](disks) for a web service, private service, or background worker was created. ###### `DeleteServerDiskEvent` The [persistent disk](disks) for a web service, private service, or background worker was deleted. ###### `RestoreDiskSnapshotEvent` The [persistent disk](disks) for a web service, private service, or background worker was restored to a [snapshot](disks#disk-snapshots). ### Environment variables ###### `UpdateEnvVarsEvent` One or more existing [environment variables](configure-environment-variables) were modified for a service. ###### `CreateEnvVarsEvent` One or more [environment variables](configure-environment-variables) were created for a service. ###### `DeleteEnvVarsEvent` One or more [environment variables](configure-environment-variables) were deleted for a service. ###### `ViewEnvVarValuesEvent` One or more [environment variable](configure-environment-variables) values were viewed for a service. ###### `DeleteEnvGroupEvent` An [environment group](configure-environment-variables#environment-groups) was deleted. ### Webhooks ###### `CreateWebhookEvent` A [webhook](webhooks) was created. ###### `UpdateWebhookEvent` A [webhook](webhooks) was changed. ###### `DeleteWebhookEvent` A [webhook](webhooks) was deleted. 
### Metrics ###### `CreateOtelIntegrationEvent` A [metrics stream](metrics-streams) was created. ###### `DeleteOtelIntegrationEvent` A [metrics stream](metrics-streams) was deleted. ###### `UpdateOtelIntegrationEvent` A [metrics stream](metrics-streams) was changed. ### Projects & environments ###### `CreateProjectEvent` A project was created. This event is always accompanied by one [`CreateEnvironmentEvent`](#createenvironmentevent) event, because every project is created with a default environment. ###### `DeleteProjectEvent` A project was deleted. This event is always accompanied by at least one [`DeleteEnvironmentEvent`](#deleteenvironmentevent) event, because deleting a project also deletes all of its associated environments. ###### `CreateEnvironmentEvent` A project environment was created. ###### `DeleteEnvironmentEvent` A project environment was deleted. ###### `MoveEnvironmentResourceEvent` A resource (such as a service or environment group) was moved into or out of a project environment. ###### `ChangeEnvironmentProtectionEvent` [Protected access](projects#protected-environments) was enabled or disabled for a project environment. ### Compliance & documents ###### `DocumentDownloadEvent` A workspace member downloaded a document from the [Render Document Center](certifications-compliance#view-compliance-documentation). ###### `SignNDAEvent` A workspace member signed an NDA to view compliance documentation. ## Enterprise events *The event types below are specific to [Enterprise orgs](enterprise-orgs).* They pertain to SSO and other org-level configuration. These events appear _only_ in audit logs exported from your org's *Settings* page (not in audit logs exported for an individual workspace). ### Member management ###### `InviteToOrgEvent` A user was invited to the org. ###### `AcceptOrgInviteEvent` An invitation to join the org was accepted. ###### `AddOrgMemberEvent` A user was added to the org. 
###### `RemoveOrgMemberEvent` A user was removed from the org. ###### `JoinTeamEvent` An org member added themselves to a workspace in the org. Enterprise Owners can add themselves to any workspace as an admin. Other org members can add themselves to [public workspaces](enterprise-orgs#per-workspace-access) only (they receive the Developer role). ###### `ChangeOrgRoleEvent` An org member's role was changed. This refers to a member's org-level role (such as [Enterprise Owner](enterprise-orgs#the-enterprise-owner-role)), not their role within a particular workspace. ###### `ChangeOrgAllowedLoginMethodsEvent` The org's set of allowed login methods was changed. ###### `ChangeOrg2FAEnforcementEvent` Two-factor authentication enforcement was enabled or disabled for the org. ### Workspace management ###### `CreateWorkspaceEvent` A workspace was created in the org. ###### `DeleteWorkspaceEvent` A workspace in the org was deleted. ###### `ChangeWorkspacePrivacyEvent` The [access setting](enterprise-orgs#per-workspace-access) for a workspace in the org was changed. ### IdP management ###### `CreateOrgDomainEvent` A domain was added to the org as part of [configuring SSO](saml-sso#sso-setup). ###### `VerifyOrgDomainEvent` Ownership of a domain was verified as part of [configuring SSO](saml-sso#sso-setup). ###### `DeleteOrgDomainEvent` A domain was removed from the org. ###### `CreateSSOConnectionEvent` An [SSO connection](saml-sso) was created. ###### `UpdateSSOConnectionEvent` An [SSO connection](saml-sso) was changed. ###### `DeleteSSOConnectionEvent` An [SSO connection](saml-sso) was deleted. ###### `ProvisionOrganizationSCIMToken` A [SCIM token](saml-sso#member-management-setup-scim) was provisioned for the org. ###### `RevokeOrganizationSCIMToken` A [SCIM token](saml-sso#member-management-setup-scim) was revoked for the org. 
## History of audit log event changes | Date | Change | |--------|--------| | `2025-03-13` | Added initial set of [enterprise events](#enterprise-events). | | `2025-03-11` | Added the following workspace event types: - [`CreateOtelIntegrationEvent`](#createotelintegrationevent) - [`DeleteOtelIntegrationEvent`](#deleteotelintegrationevent) - [`UpdateOtelIntegrationEvent`](#updateotelintegrationevent) - [`CreateWebhookEvent`](#createwebhookevent) - [`UpdateWebhookEvent`](#updatewebhookevent) - [`DeleteWebhookEvent`](#deletewebhookevent) | | `2024-12-18` | Added the following workspace event types: - [`DocumentDownloadEvent`](#documentdownloadevent) - [`SignNDAEvent`](#signndaevent) | | `2024-09-24` | Added the following workspace event types: - [`MaintenanceModeEnabledEvent`](#maintenancemodeenabledevent) - [`MaintenanceModeURIUpdatedEvent`](#maintenancemodeuriupdatedevent) | | `2024-08-14` | Added the following workspace event types: - [`ViewEnvVarValuesEvent`](#viewenvvarvaluesevent) - [`ViewConnectionInfoEvent`](#viewconnectioninfoevent) (Render Postgres) - [`ViewConnectionInfoEvent`](#viewconnectioninfoevent-1) (Render Key Value) | | `2024-06-24` | Added initial set of [Workspace events](#workspace-events). | # Enterprise Organizations With a Render Enterprise plan, you can manage all of your team's users, workspaces, and services in a single *organization* (or *org*): [img] Each organization member can belong to any combination of workspaces, based on which services they need to access. You can also add [guests](#member-types) that receive access to a single workspace. You can integrate your org with your identity provider (IdP) to enable [SAML single sign-on](saml-sso) (SSO), along with member management via SCIM. ## Creating an org As part of setting up your Enterprise account, the Render team works with you directly to create your organization. If you have any existing Render workspaces, we'll help you transfer them into your org for centralized management. 
Each member of each transferred workspace becomes a member of your org. ## Adding workspaces ### Creating a new workspace > Only org members with the *Enterprise Owner* role can create new workspaces in the org. 1. From your organization's Workspaces page in the [Render Dashboard][dboard], click *+ New Workspace*. 2. Provide a name for the workspace. 3. Set the workspace's privacy setting to *Public* or *Invite-only*. - *Public*: Any org member can add themselves to the workspace. - *Invite-only*: Only admins of the workspace can invite other org members. 4. Click *Create Workspace*. You're all set! Render creates the workspace and adds you as its first admin. You can immediately start creating services and inviting members. ### Transferring an existing workspace As part of creating your org, the Render team helps you transfer any of your existing workspaces into it. If you later need to transfer a different workspace into your org, please [reach out to our support team](https://dashboard.render.com?contact-support) in the Render Dashboard. ## Access management ### The Enterprise Owner role During org creation, the Render team assigns at least one member of your org the *Enterprise Owner* role. Members with this role can do the following: - Manage all org-level settings, such as integrating your IdP for SSO and SCIM - Create new workspaces in the org - Add or remove the Enterprise Owner role from other org members - Add themselves to any workspace as an admin - Enterprise Owners are not _automatically_ added to any workspaces in the org. Other org members do not have an organization-level role or associated permissions. Workspace-level permissions depend on a [member's role within each workspace](team-members#member-roles). ### Member types > *This section assumes you've enabled [*SAML SSO*](saml-sso) for your org.* > > If you _haven't_ enabled SSO, workspace admins can invite any Render user to any org-managed workspace. 
These users automatically become standard members of the org. | Member Type | Description | |--------|--------| | *Standard member* | Any user with an email address managed by your IdP. Standard members automatically join your org the first time they log in to Render via SSO. After joining, they can then add themselves to any public workspace in the org and receive invitations to invite-only workspaces. You can optionally manage standard members via [SCIM](saml-sso#member-management-setup-scim). | | *Guest* | Any user with an email address that _isn't_ managed by your IdP (which prevents them from logging in via SSO). Invite guests to collaborate with individuals outside your company, such as consultants. Workspace admins can invite guests to individual workspaces in the org. Guests can't access any org resources _except_ those in the single workspace they're invited to. Guests are billed identically to standard members. | ### Per-workspace access Each workspace in an org has one of two privacy settings: *public* or *invite-only*. Newly added workspaces are invite-only by default. Workspace admins can change a workspace's privacy setting from the workspace's Settings page in the Render Dashboard. - Standard org members can add themselves to any public workspace (guests cannot). - Invite-only workspaces require an invitation from a workspace admin. When you add an org member to a workspace, you can assign them any [member role.](team-members#member-roles) #### The Billing role > *This role is available only for Enterprise orgs.* When you add an org member to an individual workspace, you can assign them the *Billing* role: [img] Members with this role can view and manage the workspace's billing and payment settings. They also receive _view-only_ access to non-sensitive details of the workspace's resources (such as service names). If an org member has the *Billing* role in _every_ org-managed workspace they belong to, your org is _not_ charged for their seat. 
See three example scenarios below: | Role in Workspace A | Role in Workspace B | Org charged for seat? | |--------|--------|--------| | Billing | Billing | No | | Billing | None (not assigned to workspace) | No | | Billing | Developer | *Yes* | Each Enterprise org supports up to two total members with the *Billing* role. # SAML Single Sign-On (SSO) > *Don't have an Enterprise plan?* > > - Non-Enterprise workspaces can enforce other [login settings](login-settings#enforcing-secure-login), such as requiring login via Google account. > - [Contact us](contact) if you're interested in upgrading to Enterprise. [Enterprise organizations](enterprise-orgs) on Render can enable single sign-on (SSO) via a SAML 2.0-compatible identity provider (IdP), such as Okta or Microsoft Entra. After setting up SSO, you can also manage provisioning and deprovisioning organization members via [SCIM](#member-management-setup-scim). *All steps described in this article must be completed by an org member with the [*Enterprise Owner*](enterprise-orgs#the-enterprise-owner-role) role.* If you need help with any of these steps, please [reach out to our support team](https://dashboard.render.com?contact-support) in the Render Dashboard. ## SSO setup ### 1. Verify domain ownership Render needs to verify that you own any domains you're configuring for SSO. To enable this, you add a TXT record with a Render-provided value to each domain's DNS configuration. 1. From your organization home in the [Render Dashboard][dboard], open the org's *Settings* page. 2. Scroll down to the *Domains* section and click *+ Add domain*: [img] 3. Click *+ Configure connection*. The following dialog appears: [img] 4. Enter the domain you want to verify and click *Next*. 5. The dialog displays the *Hostname* (`_render-domain-challenge`) and *Value* for the TXT record you'll add to your domain's DNS configuration. 6. In your DNS provider's admin console, add a new TXT record with the provided *Hostname* and *Value*. 
- Consult your DNS provider's documentation for instructions on adding a TXT record. > *Your DNS change might take up to 24 hours to propagate.* > > Render cannot verify your domain until the new TXT record is visible in your domain's DNS configuration. 7. After the DNS change has propagated, return to your org's Settings page and click the *Retry verification* button next to the domain you added: [img] If the verification succeeds, the domain's status updates to *Verified*. Repeat the steps above for each domain you want to configure for SSO. ### 2. Configure your IdP 1. From your org's Settings page in the Render Dashboard, scroll down to *SAML SSO* and click *+ Configure connection*: [img] The following dialog appears: [img] 2. Copy the *ACS URL* and *Audience URI* values. You'll provide these to your IdP. 3. In your IdP's admin console, create a new SAML 2.0 application for Render. - Consult your IdP's documentation for instructions on creating a new SAML application. 4. Provide your *ACS URL* and *Audience URI* values in your new SAML application's configuration. - Some IdPs might refer to *ACS URL* as *Single Sign-On URL*. - Some IdPs might refer to *Audience URI* as *Entity ID*. 5. After your application is created, obtain its *metadata URL*. 6. Back in the Render Dashboard, return to the SSO configuration dialog. Switch to the *Connect provider to Render* tab: [img] 7. Paste your IdP's metadata URL and click *Configure SAML SSO*. ### 3. Add yourself to your SAML application In your IdP's admin console, add yourself to the new SAML application you created for Render. Only the users you add to this application can log in to Render via SSO. You can add other org members to the application after you confirm that SSO login works as expected. ### 4. Test SSO login You should now be able to log in to Render via SSO with your IdP-managed email address. 
Your org doesn't yet _require_ SSO login (you [configure this later](#requiring-sso-login)), so you can fall back to another login method if the flow fails.

1. Log out of the Render Dashboard.
2. Go to [dashboard.render.com/login](https://dashboard.render.com/login) and click *Sign in with SSO*.
3. Provide your IdP-managed email address and complete your provider's login flow.

If you fail to log in, review your SSO configuration in the Render Dashboard and your IdP's admin console. If you need help, please [contact our support team](https://dashboard.render.com?contact-support) in the Render Dashboard.

### 5. Add other org members to your SAML application

After you confirm that the SSO login flow works as expected, you can add other org members to your SAML application in your IdP's admin console.

*At this point, SSO is available as an _optional_ login method for specified org members.* Next, you can _require_ SSO login for all org members.

## Requiring SSO login

After you [set up SSO](#sso-setup) for your org and confirm that it works as expected, you can then _require_ all of your org members to log in via SSO.

> *Requiring SSO might lock out certain org members and/or invalidate certain API keys!*
>
> You can view a list of affected members and API keys before confirming the change. See details below.

1. From your organization home in the [Render Dashboard][dboard], open your org's *Settings* page.
2. Under *Security*, find *Require Login Method*: [img]
3. Click *Edit*, then select *SAML SSO* as a required login method.
4. Click *Save*. A confirmation dialog like this one appears: [img]

This dialog displays a report of your org members who have yet to log in via SSO, along with all API keys owned by those members.

> *Review this dialog carefully before proceeding!*
>
> If you enforce SSO:
>
> - Each listed member will lose access to the org until they log in via SSO.
> - Any member without an IdP-managed email address will _permanently_ lose access to the org. > - Each listed API key will be invalidated until its owner logs in via SSO. > > *This will affect any integrations that rely on the invalidated API keys.* 5. After reviewing the report, notify affected org members as needed. Direct them to log out, then log in via SSO. - This ensures that the members and their API keys do not experience any interruption in service when you enforce SSO. 6. When you're ready, return to the confirmation dialog and click *Require SAML SSO*. You're all set! With SSO enforced, all accounts managed by your IdP _must_ use SSO to log in, even to view workspaces outside your org. > *With SSO enforced, you can still invite guest members to individual workspaces in your org.* > > Because they can't log in via SSO, guests are restricted to the single workspace they're invited to. For details, see [Member types](enterprise-orgs#member-types). ## Member management setup (SCIM) All SAML-enabled Enterprise orgs support just-in-time (JIT) member provisioning. When a user logs in via SSO for the first time, Render adds them to your org as a member. You can also optionally enable member management via SCIM. This enables you to provision and deprovision IdP-managed org members from your IdP's admin console. ### 1. Generate a SCIM token 1. From your organization home in the [Render Dashboard][dboard], open the org's *Settings* page. 2. Under *Security*, find the *SCIM Provisioning* section and click *+ Create*: [img] A *SCIM Configuration* dialog appears. 3. Copy the values of *Base URL* and *Token* in the dialog. You'll provide these to your IdP. ### 2. Configure SCIM in your IdP 1. In your IdP's admin console, navigate to the SAML application you created for Render. 2. Enable SCIM provisioning for the application. - Consult your IdP's documentation for instructions on enabling SCIM provisioning. 3. 
Provide the following values for your application's SCIM configuration: | Field | Value | | --------------------------------- | --------------------------------------------------------------------------------------- | | SCIM version | 2.0 | | SCIM connector base URL | `https://sso.render.com/scim/v2/` | | Unique identifier field for users | `email` | | Authentication method | HTTP Header | | Bearer token | The *Token* value you copied during [SCIM token generation](#1-generate-a-scim-token) | You're all set! Your IdP syncs with Render to enable managing org members from your IdP's admin console. ## FAQ ###### Can I use SSO with a non-Enterprise plan? *No.* SSO is available only with an Enterprise plan. Please [contact us](contact) about upgrading to an Enterprise plan to enable SSO. ###### Can I use SSO with multiple identity providers? *No.* Each Render organization can connect only one IdP for SSO. ###### Does Render SSO support OIDC or other non-SAML protocols? *No.* Currently, Render SSO only supports SAML 2.0. ###### If I enable SSO, can I add guests to my org from outside my company? *Yes.* Workspace admins can add guests to individual workspaces in your org. Guests can't log in via SSO and are restricted to the single workspace they're invited to. For details, see [Member types](enterprise-orgs#member-types). ###### What happens to resources created by a SCIM-deprovisioned org member? - Deprovisioned members are immediately logged out of Render and cannot log back in. - All API keys belonging to the deprovisioned member are immediately invalidated. - Any services originally created by the member remain active and are not affected. # DDoS Protection Render provides free distributed denial-of-service protection to every application and website hosted on our platform. We're using Cloudflare’s industry-leading DDoS protection infrastructure behind the scenes, and you don't have to do anything to benefit. 
When your web service is deployed to Render, it is automatically protected.
[img]
Please see our [announcement blog post](blog/free-ddos-protection) to learn more about DDoS attacks and why we built this feature. ## Limitations If you use wildcard custom subdomains and your own Cloudflare account, please see our [Custom Domains documentation](configure-cloudflare-dns#adding-a-wildcard-custom-domain-without-the-base-domain) for a specific configuration that may cause traffic to be incorrectly routed. If you have any questions, you can get in touch with us at support@render.com. # Render Platform Maintenance Render routinely performs infrastructure maintenance to improve platform performance, reliability, and security. In _most_ cases, maintenance is completely transparent, with no interruption to your services. ## Service-affecting maintenance > Maintenance _never_ changes a service's configuration details or instance type. Certain types of maintenance (such as OS upgrades) do require brief downtime, specifically for services that provide persistent storage: - [Render Postgres databases](postgresql) - [Render Key Value instances](key-value) - Services with an attached [persistent disk](disks) To ensure the integrity of your data, Render spins down these service instances completely before spinning up their replacements. This process usually takes a few minutes. For Render Postgres databases with [high availability](postgresql-high-availability), this is reduced to less than one minute. *What about other services?* Services without attached storage ("stateless" services) remain available during maintenance windows, because they support [zero-downtime deploys](deploys#zero-downtime-deploys). 
Render can spin up new instances for these services before deprovisioning the old ones, ensuring there's always a routing destination for incoming traffic: [diagram]

*For services that perform long-running tasks* (such as [background workers](background-workers)), make sure these services shut down gracefully when they receive a `SIGTERM` signal. Render sends this signal when spinning down an instance as part of maintenance (and as part of any zero-downtime deploy).

### Resolving maintenance deploy failures

> *For assistance resolving a maintenance deploy failure:*
>
> - See [Troubleshooting Your Deploy](troubleshooting-deploys) for common issues and solutions.
> - [Reach out to our support team](https://dashboard.render.com/?contact-support) in the Render Dashboard.

As part of service-affecting maintenance, Render deploys your services to new instances before spinning down the old ones. In certain cases, this deploy might fail. This most commonly occurs if your service's most recent deploys were failing _before_ maintenance began, indicating an issue with your service's current build and deploy configuration.

In the event of a deploy failure, Render keeps your old instance running (until a specified deadline) and continues routing traffic to it: [diagram]

Render immediately notifies you by email if a maintenance deploy fails. This email includes a deadline for resolving the issue, after which Render will bring down the old instance. *If you do not resolve the issue before the deadline, your service will be taken offline.*

### Rescheduling maintenance

> *[*Free services*](free) do not support rescheduling maintenance.*
>
> Render might perform maintenance on a free service at any time, without advance notice.

If any of your paid services will experience downtime as part of a maintenance window, Render always provides advance notice via email _and_ in the [Render Dashboard][dboard].
Service-affecting maintenance windows usually occur no more than once every three months. Whenever possible, Render provides the ability to reschedule a paid service's maintenance window to a more convenient time (usually within a few days of the original scheduled date). You can reschedule in the [Render Dashboard][dboard] or via the [Render API](https://api-docs.render.com/reference/update-maintenance). ### Triggering maintenance Both the Render Dashboard and [API](https://api-docs.render.com/reference/trigger-maintenance) support _immediately_ triggering a service's scheduled maintenance. The following actions _also_ trigger maintenance immediately, because they already involve replacing a service's current instance with a new one: - Redeploying a service - Restarting a Render Postgres database - Suspending and resuming a Render Postgres database - Changing the instance type for any service, Render Postgres database, or Render Key Value instance # Render Platform Compliance and Certifications The Render platform is fully compliant with the following security frameworks: | Regulation / Framework | Description | |--------|--------| | *SOC 2 Type 2* | Validates an organization's security controls and their operational effectiveness via annual third-party audit. | | *ISO 27001* | Defines a global standard of requirements for information security management systems (ISMS). | In addition, Render supports [*HIPAA-enabled workspaces*](hipaa-compliance) for organizations that process and store US health data. Render also maintains its own security policy and complies with the General Data Protection Regulation (GDPR). 
## View compliance documentation > *Some documents require an Organization or Enterprise workspace.* [See details.](#provided-documents) Certificates, attestations, and other security policy documents are available from the [Document Center](https://dashboard.render.com/documents) in the Render Dashboard: [img] Open the Document Center by visiting [dashboard.render.com/documents](https://dashboard.render.com/documents) or by clicking *Compliance and documents* in the top-right corner of the dashboard: [img] ### Provided documents *All workspaces* can access the following documents: - SOC 3 report - GDPR DPA *Organization and Enterprise workspaces* can access the following documents, which also require signing a non-disclosure agreement (NDA): - SOC 2 Type 2 report - ISO 27001 certificate - Render security policy # HIPAA on Render > *HIPAA-enabled workspaces require an Organization or Enterprise plan.* *HIPAA* is a United States federal law that sets standards for protecting individuals' healthcare data. It defines administrative, physical, and technical safeguards for organizations that process or store protected health information (PHI). Render provides *HIPAA-enabled workspaces* for organizations subject to HIPAA requirements. These workspaces run services and datastores on access-restricted hosts, helping to secure any PHI processed or stored by your applications. Access to these hosts by Render staff is subject to strict controls. ## Setup > *Before proceeding, review all [important considerations](#important-considerations) below.* The following steps must be completed by a workspace admin: 1. In the [Render Dashboard][dboard], open your *Workspace Settings* page and scroll down to the *Compliance* section: [img] 2. Click *Get Started*. This opens a confirmation flow to receive Render's Business Associate Agreement (BAA). 3. Review all enablement steps, best practices, and workspace details outlined in the confirmation flow. 4. 
After you complete the confirmation flow, Render emails you a link to sign the BAA. 5. After you sign the BAA, return to the *Compliance* section of your workspace settings. After about a minute, your HIPAA Compliance status updates to *Pending*: [img] 6. When you're ready, click *Enable HIPAA* to initiate the enablement process. - Before proceeding, review all details in the confirmation dialog that appears. - If you don't initiate the enablement process manually, Render initiates it automatically 72 hours after you sign the BAA. As part of enablement, Render redeploys all of your workspace's existing services and datastores to access-restricted hosts. Your services might become unavailable for a few minutes. Render emails you when the enablement process begins, then a second time after it completes. 7. After the process completes, your HIPAA Compliance status updates to *Enabled*: [img] Your workspace is now ready to host HIPAA-compliant applications. ## Important considerations *Before upgrading to a HIPAA-enabled workspace, note all of the following:* - Upgrading to a HIPAA-enabled workspace is an irreversible action. - An additional 20% fee applies to all usage (compute, storage, etc.) in a HIPAA-enabled workspace. The minimum monthly fee is $250. - HIPAA-enabled workspaces cannot deploy or run [free instances](free). - This is because free instances run on hosts that do not support restricted access for HIPAA compliance. - If your workspace has existing free instances, Render migrates them to the smallest paid instance type as part of the upgrade process. - Render also _suspends_ free web services migrated this way (Postgres and Key Value instances are not suspended). These services are not billed while suspended. You can resume them any time after upgrading. - For [*Enterprise* plans](enterprise-orgs), Render upgrades _one_ of your workspaces to a HIPAA-enabled workspace. - You specify which workspace to upgrade in your BAA. 
- Your other workspaces are _not_ HIPAA-enabled. All HIPAA-compliant workflows must run in the HIPAA-enabled workspace. - Even in a HIPAA-enabled workspace, you _must not_ include PHI in certain resources. - For details, see [Where can I process and store PHI?](#where-can-i-process-and-store-phi) - *A HIPAA-enabled workspace does not automatically make your applications HIPAA-compliant.* - You are responsible for adhering to HIPAA regulations for all applications in your workspace. - For more information, see Render's [shared responsibility model](shared-responsibility-model). ## Where can I process and store PHI? > *Never process or store PHI on Render outside of a HIPAA-enabled workspace.* Not all resources in a HIPAA-enabled workspace support HIPAA-compliant processing and storing of PHI. See the following table for details: ***Live services*** | Resource | PHI OK? | Details | |--------|--------|--------| | [Web services](web-services) | 🟢 | | | [Static sites](static-sites) | ❌ | Static sites consist of static assets hosted at a publicly accessible URL. Those assets _must not_ include any PHI. | | [Private services](private-services) | 🟢 | | | [Background workers](background-workers) | 🟢 | | | [Cron jobs](cronjobs) | 🟢 | | | Service-generated [logs](logging) | ❌ | Never include PHI in any message logged by any Render service, whether at build time or runtime. | | [Service previews](service-previews) and [preview environments](preview-environments) | 🟢 | Preview instances run on access-restricted hosts, just like their production counterparts. | ***Datastores*** | [Persistent disks](disks) | 🟢 | All disks and their daily snapshots are encrypted at rest. | | [Render Postgres](postgresql) databases | 🟢 | Your primary databases, [read replicas](postgresql-read-replicas), and [high availability](postgresql-high-availability) standby databases all support HIPAA-compliant workflows. 
| | [Render Key Value](key-value) instances | 🟢 | | ***Builds*** | Build artifacts | ❌ | This is the bundle generated by your service's [build command.](deploys#build-command) It includes application code, dependencies, static assets, and any other files needed to run your service. These generated files must not include PHI. | | Infrastructure-as-code config | ❌ | This includes `render.yaml` files for [Blueprints](infrastructure-as-code), along with [Terraform](terraform-provider) configuration files. | | Resource names | ❌ | Do not include PHI in the name you assign to _any_ resource, including: - Service names - Environment variable names - Secret file filenames - Table or column names in your database | # Shared Responsibility Model Render's security controls are predicated on the assumption that customers maintain robust internal controls. The effective use of the Render system relies heavily on customers actively managing their assigned security responsibilities. This includes safeguarding user data, controlling access, and upholding security protocols. This document delineates Render's specific responsibilities alongside the customer's obligations and identifies areas of shared responsibility. It aims to guide customers in establishing comprehensive security practices. Note that the procedures and controls listed here serve as foundational guidance. They are not exhaustive. Customers are encouraged to implement additional controls that align with their specific security needs and compliance requirements. 
## Customer responsibilities

*Customer is responsible for:*

- Protecting all secrets within their organization
- Managing and reviewing user access to the Render system
- Implementing and enforcing data policies regarding the types of data entered into the Render system, ensuring data is transmitted securely and encrypted as necessary
- Keeping informed about communications from Render that affect system security, user obligations, or service availability
- Establishing and maintaining security controls for system-generated outputs and reports
- Complying with relevant legal and regulatory requirements
- Securing application-level configurations
- Performing regular security assessments of their applications
- Protecting the endpoints (workstations) used to access the Render system
- Developing and maintaining their own business continuity and disaster recovery plans
- Deleting their data from the Render system upon termination of services

## Render responsibilities

*Render is responsible for:*

- Keeping all underlying operating systems patched and up to date
- Securing all internal networking between services and databases
- Securing the infrastructure that hosts all hardware, software, networking, and facilities
- Managing the underlying servers, networks, and storage
- Ensuring that the runtime environments for supported programming languages and frameworks are secure and up to date
- Maintaining transparency about operational status and any security breaches
- Securing the platform, which includes middleware and other integrated services that it provides
- Providing detailed documentation and guidelines on the security features of the platform
## Shared responsibilities

*Customer and Render share responsibility for:*

- Monitoring and responding to incidents, depending on the nature of the incident and where it occurs in the stack
- Security training
- Data privacy management: both parties should cooperate to ensure that data privacy is maintained according to best practices and compliance requirements
- Conducting regular reviews of their security practices

# Render Penetration Testing Policy

Render customers are welcome to carry out security assessments or penetration tests of their own Render-hosted services without prior approval from Render. This helps customers identify and remediate vulnerabilities in their application environments.

## Prohibited testing

Direct testing of Render's core infrastructure, APIs, or other services not provisioned for the individual customer's use is strictly forbidden without explicit consent from Render. Testing of another Render user's infrastructure is not permitted without explicit consent.

Engaging in any form of Denial of Service (DoS) testing against Render infrastructure, including customer environments, is expressly prohibited. Render provides free [DDoS protection](ddos-protection) to all hosted services, and violating this policy by attempting DoS attacks jeopardizes the security and availability of services across our platform.

## Communicating with Render

If you discover a security issue within the Render product, please submit a report to our [Vulnerability Disclosure Program](https://hackerone.com/bde6ea21-8984-4f4c-89ca-55cc309228d2/embedded_submissions/new) immediately.

If Render detects abusive activities related to your security testing, we will contact you and ask you to stop.

# Back Up Render Postgres to Amazon S3

In this guide, we'll show you how to back up your Render Postgres instance to Amazon S3. Render continually backs up all paid Render Postgres instances to provide [point-in-time recovery](postgresql-backups).
For additional control, you can create a [cron job](cronjobs) that periodically backs up your data to Amazon S3. You will need a [Render Postgres database](postgresql) and an [Amazon Web Services](https://aws.amazon.com/) (AWS) account for this guide. By following this guide, you'll be able to: 1. [Create AWS credentials](#create-aws-credentials) that will enable working with Amazon S3. 2. [Configure and create a backup Cron Job](#configure-and-create-the-backup-cron-job) for your database. 3. [Validate that the backup is working.](#validate-the-cron-job) ## Create AWS Credentials We will create credentials with AWS IAM to enable working with Amazon S3. 1. Open the AWS console and navigate to the IAM service. Open the Users view and select the `Add Users` button. [img] 2. Enter a descriptive username, such as `-render-postgres-backup-cron`. 3. For `Select AWS credential type*` select `Access key - Programmatic access`. 4. Select the `Next: Permissions` button to move to the `Set Permissions` view. [img] 5. In the `Set Permissions` view, select `Attach existing policies directly` and search for `AmazonS3FullAccess`. Check the box to select `AmazonS3FullAccess`. > It's possible to use finer-grained policies to authorize the Cron Job. We recommend Litestream's guide if you'd like to further lock down permissions. 6. Skip through the next two views with the `Next` buttons to move to the `Review` view. Confirm the details of your user. 7. Select the `Create User` button. 8. Record the access key ID (`AKIAXXXXXXXXXXXXXXXX`) and the secret access key. ## Configure and Create the Backup Cron Job 1. Fork [render-examples/postgres-s3-backups](https://github.com/render-examples/postgres-s3-backups). 2. In the `render.yaml` file, edit the `fromDatabase` name in the Cron Job's `DATABASE_URL` environment variable to be the name of your Render Postgres instance. 
> *Do not use PGBouncer as your `DATABASE_URL` when performing a backup.* For details, see [this GitHub issue](https://github.com/pgbouncer/pgbouncer/issues/452). 3. In the `render.yaml` file, edit the Cron Job's `region` to match the region of your database. 4. By default, the Cron Job will run the backup daily at 3 a.m. UTC. You can change the time and frequency by modifying the Cron Job's `schedule` in the `render.yaml` file. 5. Commit and push your changes. 6. On the Render Dashboard, go to [Blueprints](https://dashboard.render.com/blueprints) and click the `New Blueprint Instance` button. Select your repository (after giving Render permission to access it, if you haven’t already). Alternatively, you can click the Deploy To Render button in the Readme of the forked repo. [img] 7. Enter a descriptive `Service Group Name` such as `Backup to S3`. 8. Fill in the environment variables: | Environment Variable | Value | | ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | *AWS_REGION* | Choose the [AWS region](https://docs.aws.amazon.com/general/latest/gr/s3.html) closest to the region of your database. For example, a Render Postgres instance in the Oregon region would use `us-west-2` for the AWS Region US West (Oregon). | | *S3_BUCKET_NAME* | Choose a globally unique name for your bucket. For example `--render-postgres-backups`. The name must follow [Bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html). | | *AWS_ACCESS_KEY_ID* | Copy the `Access key ID` (`AKIAXXXXXXXXXXXXXXXX`) we saved when creating the User. | | *AWS_SECRET_ACCESS_KEY* | Copy the secret access key we saved when creating the User. | | *POSTGRES_VERSION* | Enter your database's PostgreSQL version. 
You can see the version when viewing your instance in the Render Dashboard. For example, `14`. |

9. Select `Apply` to create the Cron Job.

## Validate the Cron Job

1. View the newly created Cron Job and wait for the first build to finish.
2. Select the `Trigger Run` button and wait for the job to finish with a `Cron job succeeded` event.
3. Verify the backup by inspecting the contents of your S3 bucket.

That's it! Your Cron Job will now periodically back up your Render Postgres instance to Amazon S3.

## Troubleshooting

### Large Databases

The `aws` CLI tool requires additional configuration when uploading large files to S3. If your compressed backup file exceeds 50 GB, add an `--expected-size` flag in the `upload_to_bucket` function in `backup.sh`.

### Credential Errors

There may be a problem with your IAM user if your Cron Job fails and you see an error message similar to:

```
An error occurred (SignatureDoesNotMatch) when calling the CreateBucket operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.
```

If you see this error, review the [Create AWS Credentials](#create-aws-credentials) instructions.

# Setting your Bun Version

| Current default Bun version |
|--------|
| *`1.3.4`* Services created before *2025-12-08* have a different default version. [See below.](#history-of-default-bun-versions) |

> *To include Bun in your service's environment, you must do _at least one_ of the following:*
>
> - Set your service's Bun version using one of the methods below.
> - Include a `bun.lock` or `bun.lockb` file in your service's root directory.
>
> Otherwise, your service's environment will _not_ include Bun.

*Set your service's Bun version in any of the following ways* (in descending order of precedence):

1. Set the `BUN_VERSION` environment variable for your service in the [Render Dashboard][dboard]: [img]
2. Add a file named `.bun-version` to the root of your repo.
This file contains a single line with the version to use:

```:.bun-version
1.3.4
```

You can specify either a semantic version number (such as `1.3.4`) or use `latest` to always use the most recent version of Bun with every deploy.

## History of default Bun versions

If you don't set a Bun version for your service, Render's default version depends on when you originally created the service:

| Service Creation Date | Default Bun Version |
|---|---|
| 2025-12-08 and later | `1.3.4` |
| 2025-08-18 to 2025-12-07 | `1.2.20` |
| Before 2025-08-18 | `1.1.0` |

# Connecting to MongoDB Atlas

This guide walks through connecting your Render-hosted application to a database hosted on [MongoDB Atlas](https://www.mongodb.com/atlas/database). This is an alternative to hosting a containerized instance of MongoDB on Render. If you prefer to host your own MongoDB instance on Render, see [Deploy MongoDB](deploy-mongodb).

For advanced usage and troubleshooting, see the [MongoDB documentation](https://www.mongodb.com/docs/).

## Create and configure a database

You complete these steps in the MongoDB Atlas web interface.

1. Select one of the following deployment options for your database:
   - Serverless
   - Dedicated
   - Shared
2. Select AWS as the cloud provider and pick the AWS region closest to the region where your Render app is deployed. You can also set the cluster tier, cluster name, and any additional settings at this point. Click *Create Cluster*.

| *Render Region* | *Database Region* |
| -------------------- | ------------------------------------------------------------------------------ |
| `Oregon, USA` | `Oregon (us-west-2)` |
| `Ohio, USA` | Dedicated tier: `Ohio (us-east-2)`<br>Shared tier: `N. Virginia (us-east-1)` |
| `Virginia, USA` | `Virginia (us-east-1)` |
| `Frankfurt, Germany` | `Frankfurt (eu-central-1)` |
| `Singapore` | `Singapore (ap-southeast-1)` |

3. Choose an authentication method. This guide assumes you are using a username and password for authentication. You could also use a [Certificate](https://www.mongodb.com/docs/manual/core/security-x.509/).
4. Create a user profile for the new database and make a note of your database username and password. You will create environment variables for these values in your Render service connecting to Atlas.
5. Update cluster connections under "Network Access". Add your Render service's [outbound IP addresses](outbound-ip-addresses). [img]
6. Under "Connection Method", select "Connect your Application". Pick the Mongo driver and version used in your Render service to create a [connection string URI](https://www.mongodb.com/docs/manual/reference/connection-string/).

## Connect to your application on Render

1. Return to the Render Dashboard and create [environment variables](configure-environment-variables) for `username` and `password` in your Render service using the database username and password you created above.
   - Some characters in credentials require special treatment (percent-encoding). See the [MongoDB documentation on connection string formats](https://www.mongodb.com/docs/manual/reference/connection-string/#std-label-connections-standard-connection-string-format).
2. Add connection details to your code by following the steps for your app's language or framework.

That's it! Your Render service should now be able to connect to your MongoDB Atlas instance.

## Further reading

MongoDB supports a variety of [drivers](https://www.mongodb.com/docs/drivers/). This guide highlights some of the most useful resources for Python and Node applications.
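As noted above, credentials embedded in a connection string URI need special treatment if they contain URI-reserved characters. Here's a minimal Node sketch of percent-encoding them with the standard `encodeURIComponent` function (the function name, cluster host, and database name below are illustrative examples, not values provided by Render or Atlas):

```javascript
// Sketch: build a MongoDB Atlas connection string from separate
// username/password values. encodeURIComponent escapes URI-reserved
// characters (such as "@", "/", and ":") that would otherwise break
// the URI. All names here are illustrative.
function buildAtlasUri({ user, password, host, database }) {
  const u = encodeURIComponent(user);
  const p = encodeURIComponent(password);
  return `mongodb+srv://${u}:${p}@${host}/${database}?retryWrites=true&w=majority`;
}

// A password containing "@" and "/" is encoded as "p%40ss%2Fword":
console.log(
  buildAtlasUri({
    user: 'appuser',
    password: 'p@ss/word',
    host: 'cluster0.example.mongodb.net',
    database: 'mydb',
  })
);
// → mongodb+srv://appuser:p%40ss%2Fword@cluster0.example.mongodb.net/mydb?retryWrites=true&w=majority
```

In a real service, read the username and password from the environment variables you created in the Render Dashboard (for example, `process.env.DB_USERNAME`) and pass the resulting URI to your driver's client constructor.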
### [Python](https://www.mongodb.com/docs/drivers/python/) The method for adding a database connection to your Python application code depends on whether your application is synchronous or asynchronous. Use [PyMongo](https://www.mongodb.com/docs/drivers/pymongo/) in your application to connect to MongoDB (for synchronous applications). - [Tutorial for completing common database operations with PyMongo](https://pymongo.readthedocs.io/en/stable/tutorial.html) Use [Motor](https://www.mongodb.com/docs/drivers/motor/) in your Python app to connect to MongoDB Atlas (for asynchronous applications using either Tornado or asyncio). - [Tutorial for completing common database operations using Motor with Tornado](https://motor.readthedocs.io/en/stable/tutorial-tornado.html) - [Tutorial for completing common database operations using Motor with asyncio](https://motor.readthedocs.io/en/stable/tutorial-asyncio.html) ### [Node](https://www.mongodb.com/docs/drivers/node/current/quick-start/) - [Guide to CRUD operations for interacting with the database in Node applications](https://www.mongodb.com/docs/drivers/node/current/fundamentals/crud/#std-label-node-crud-landing) # Connect to Render Key Value with ioredis This guide walks through connecting to a [Render Key Value](key-value) instance with [ioredis](https://github.com/luin/ioredis), the popular Node.js Redis client. > We recommend using the latest version of ioredis. This guide assumes a minimum version of `5.0.0`. ## Key Value setup 1. Create a Render Key Value instance with [these steps](key-value#create-your-key-value-instance). 2. Obtain your instance's internal URL from its *Connect* menu in the [Render Dashboard][dboard]. - You can use this URL to connect to your instance from your other services in the same region. 
> If you want to connect to your instance from outside Render (for testing, dev environments, etc.), you also need to [enable external connections](key-value#enabling-external-connections) and add the IP addresses you want to connect from. After you do, you can view your instance's external URL. ## Configure ioredis Next, we'll provide our ioredis client with connection details for our Render Key Value instance. > We strongly recommend storing connection details in [environment variables](configure-environment-variables), such as `REDIS_URL`. ### Connecting via URL We recommend configuring ioredis by passing in the provided internal or external Key Value URL. When used in Blueprints, you can pass this URL to your services using the `fromService` syntax ([docs](blueprint-spec#environment-variables)). ```javascript const Redis = require('ioredis') const { REDIS_URL } = process.env // Internal URL example: // "redis://red-xxxxxxxxxxxxxxxxxxxx:6379" // External URL is slightly different: // "rediss://red-xxxxxxxxxxxxxxxxxxxx:PASSWORD@HOST:6379" const keyValueClient = new Redis(REDIS_URL) keyValueClient.set('animal', 'cat') keyValueClient.get('animal').then((result) => { console.log(result) // Prints "cat" }) ``` ### Setting detailed connection config To explicitly configure ioredis you can use the following examples: #### Internal connection ```javascript const Redis = require('ioredis') // Internal URL, extract the details into environment variables. // "redis://red-xxxxxxxxxxxxxxxxxxxx:6379" const keyValueClient = new Redis({ host: process.env.REDIS_SERVICE_NAME, // Render Key Value service name, red-xxxxxxxxxxxxxxxxxxxx port: process.env.REDIS_PORT || 6379, // Key Value port }) ``` #### External connection ```javascript const Redis = require('ioredis') // External Key Value URL, extract the details into environment variables. 
// "rediss://red-xxxxxxxxxxxxxxxxxxxx:PASSWORD@HOST:6379" const keyValueClient = new Redis({ username: process.env.REDIS_SERVICE_NAME, // Key Value name, red-xxxxxxxxxxxxxxxxxxxx host: process.env.REDIS_HOST, // Key Value hostname, REGION-kv.render.com password: process.env.REDIS_PASSWORD, // Provided password port: process.env.REDIS_PORT || 6379, // Connection port tls: true, // TLS required when externally connecting to Key Value }) ``` ## Code examples [Full examples](https://github.com/render-examples/ioredis) of the above snippets are available in our [Render Examples](https://github.com/render-examples) repo. # Setting Your Elixir and Erlang Versions Elixir version `1.18.4` and Erlang/OTP version `27.0` are the defaults for Render services created on or after *2025-06-12*. You can specify your service's Elixir and/or Erlang/OTP version by setting environment variables: ## Elixir Add an environment variable called `ELIXIR_VERSION` to your service and set its value to a valid version (e.g., `1.14.5`). Supported Elixir versions are [listed below](#supported-elixir-versions). If you don't _also_ specify an Erlang/OTP version, Render automatically downloads an Erlang runtime that's compatible with your chosen Elixir version. ### Supported Elixir versions *Click to show versions* - `1.19.4` - `1.19.3` - `1.19.2` - `1.19.1` - `1.19.0` - `1.18.4` - `1.18.3` - `1.18.2` - `1.18.1` - `1.18.0` - `1.17.3` - `1.17.2` - `1.17.1` - `1.17.0` - `1.16.3` - `1.16.2` - `1.16.1` - `1.16.0` - `1.15.8` ## Erlang/OTP Add an environment variable called `ERLANG_VERSION` to your service and set the value to a valid version (e.g., `24.3.4`). > If you set an Erlang/OTP version, make sure it's [compatible with your Elixir version](https://hexdocs.pm/elixir/1.16.1/compatibility-and-deprecations.html#compatibility-between-elixir-and-erlang-otp)! Supported Erlang/OTP versions are [listed below](#supported-erlangotp-versions). 
Note that `22.2` is a less recent version than `22.2.8`, because valid versions are based on [tags in the official Erlang repo](https://github.com/erlang/otp/tags). ### Supported Erlang/OTP versions *Click to show versions* - `27.3.4` - `27.3.3` - `27.3.2` - `27.3.1` - `27.3` - `27.2.4` - `27.2.3` - `27.2.2` - `27.2.1` - `27.2` - `27.1.3` - `27.1.2` - `27.1.1` - `27.1` - `27.0.1` - `27.0` - `26.2.5` - `26.2.4` - `26.2.3` - `26.2.2` - `26.2.1` - `26.2` - `26.1.2` - `26.1.1` - `26.1` - `26.0.2` - `26.0.1` - `26.0` - `25.3.2` - `25.3.1` - `25.3` - `25.2.3` - `25.2.2` - `25.2.1` - `25.2` - `25.1.2` - `25.1.1` - `25.1` - `25.0.4` - `25.0.3` - `25.0.2` - `25.0.1` - `25.0` - `24.3.4` - `24.3.3` - `24.3.2` - `24.3.1` - `24.3` - `24.2.2` - `24.2.1` - `24.2` - `24.1.7` - `24.1.6` - `24.1.5` - `24.1.4` - `24.1.3` - `24.1.2` - `24.1.1` - `24.1` - `24.0.6` - `24.0.5` - `24.0.4` - `24.0.3` - `24.0.2` - `24.0.1` - `24.0` - `23.3.4` - `23.3.3` - `23.3.2` - `23.3.1` - `23.3` - `23.2.7` - `23.2.6` - `23.2.5` - `23.2.4` - `23.2.3` - `23.2.2` - `23.2.1` - `23.2` - `23.1.5` - `23.1.4` - `23.1.3` - `23.1.2` - `23.1.1` - `23.1` - `23.0.4` - `23.0.3` - `23.0.2` - `23.0.1` - `23.0` - `22.3.4` - `22.3.3` - `22.3.2` - `22.3.1` - `22.3` - `22.2.8` - `22.2.7` - `22.2.6` - `22.2.5` - `22.2.4` - `22.2.3` - `22.2.2` - `22.2.1` - `22.2` - `22.1.8` - `22.1.7` - `22.1.6` - `22.1.5` - `22.1.4` - `22.1.3` - `22.1.2` - `22.1.1` - `22.1` - `22.0.7` - `22.0.6` - `22.0.5` - `22.0.4` - `22.0.3` - `22.0.2` - `22.0.1` - `22.0` - `21.3.8` - `21.3.7` - `21.3.6` - `21.3.5` - `21.3.4` - `21.3.3` - `21.3.2` - `21.3.1` - `21.3` - `21.2.7` - `21.2.6` - `21.2.5` - `21.2.4` - `21.2.3` - `21.2.2` - `21.2.1` - `21.2` - `21.1.4` - `21.1.3` - `21.1.2` - `21.1.1` - `21.1` - `21.0.9` - `21.0.8` - `21.0.7` - `21.0.6` - `21.0.5` - `21.0.4` - `21.0.3` - `21.0.2` - `21.0.1` - `21.0` - `20.3.8` - `20.3.7` - `20.3.6` - `20.3.5` - `20.3.4` - `20.3.3` - `20.3.2` - `20.3.1` - `20.3` - `20.2.4` - `20.2.3` - `20.2.2` - `20.2.1` - 
`20.2` - `20.1.7` - `20.1.6` - `20.1.5` - `20.1.4` - `20.1.3` - `20.1.2` - `20.1.1` - `20.1` - `20.0.5` - `20.0.4` - `20.0.3` - `20.0.2` - `20.0.1` - `20.0` - `19.3.6` - `19.3.5` - `19.3.4` - `19.3.3` - `19.3.2` - `19.3.1` - `19.3` - `19.2.3` - `19.2.2` - `19.2.1` - `19.2` - `19.1.6` - `19.1.5` - `19.1.4` - `19.1.3` - `19.1.2` - `19.1.1` - `19.1` - `19.0.7` - `19.0.6` - `19.0.5` - `19.0.4` - `19.0.3` - `19.0.2` - `19.0.1` - `19.0` - `18.3.4` - `18.3.3` - `18.3.2` - `18.3.1` - `18.3` - `18.2.4` - `18.2.3` - `18.2.2` - `18.2.1` - `18.2` - `18.1.5` - `18.1.4` - `18.1.3` - `18.1.2` - `18.1.1` - `18.1` - `18.0.3` - `18.0.2` - `18.0.1` - `18.0` - `17.5.6` - `17.5.5` - `17.5.4` - `17.5.3` - `17.5.2` - `17.5.1` - `17.5` - `17.4.1` - `17.4` - `17.3.4` - `17.3.3` - `17.3.2` - `17.3.1` - `17.3` - `17.2.2` - `17.2.1` - `17.2` - `17.1.2` - `17.1.1` - `17.1` - `17.0.2` - `17.0.1` - `17.0` ## History of default Elixir versions If you don't set an Elixir version for your service, Render's default version depends on when you originally created the service: | Service Creation Date | Default Elixir Version | |---|---| | 2025-06-12 and later | `1.18.4` | | 2024-03-05 | `1.16.1` | | 2023-11-01 | `1.15.6` | | Before 2023-11-01 | `1.9.4` | # Migrating from GitHub Pages Migrating from GitHub Pages to Render is a quick and easy process and gives you much more control over your static site builds and deploys. 1. Create a new *Static Site* on Render and select your GitHub Pages repository. 2. Use the following values during creation: - *Build Command:* `bundle exec jekyll build` - *Publish Directory:* `_site` That's it! Your site will be live on your Render URL as soon as the build finishes. Follow our [custom domains](custom-domains) guide to add your own domains to your site. ## A note on Ruby versions By default, Render uses the latest LTS version of Ruby. 
It can also automatically detect and install the version of Ruby specified in `.ruby-version` at the root of your project, or in your `Gemfile`. At the time of writing, GitHub Pages uses Ruby version `2.5.3`; you can check the current dependency version on [GitHub Pages' Dependency versions page](https://pages.github.com/versions/). # Changes to Render TLS certificates issued by Let's Encrypt ## What is the change? On September 30th 2021, there will be a change in how older browsers and devices trust the Let's Encrypt certificates that Render uses for [TLS on all applications and static sites](tls). This will result in a minor decrease in TLS compatibility for old clients and devices. ## Am I affected? Devices and browsers running up-to-date software will continue working normally. Let's Encrypt has taken steps to maintain support for the vast majority of older devices as well. If you run a large website, or you need to support less common software (particularly non-browser software), please review Let's Encrypt's [documentation](https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/) about this change. ## What should I do? There is no action required by you. Please reach out to support if you have any questions. # Setting Your Node.js Version | Current default Node.js version | |--------| | *`22.16.0`* Services created before *2025-06-12* have a different default version. [See below.](#history-of-default-nodejs-versions) | *Set a different Node.js version in _any_ of the following ways* (in descending order of precedence): 1. Set the `NODE_VERSION` environment variable for your service in the [Render Dashboard][dboard]: [img] 2. Add a file named `.node-version` to the root of your repo. This file contains a single line with the version to use: ```text 18.18.0 ``` 3. Add a file named `.nvmrc` to the root of your repo. This file uses the same format as `.node-version`. 4. 
Specify a Node.js version range in your `package.json` file, under the [`engines`](https://docs.npmjs.com/cli/v10/configuring-npm/package-json#engines) property: ```json "engines": { "node": ">=18.18.0 <19.0.0" } ``` If there isn't a `package.json` file in your repo's root directory, Render uses the first `package.json` file it finds in a subdirectory. > **Always include an upper bound in your version range.** > > An unbounded range (such as `>=18`) always resolves to the [`latest` release](https://nodejs.org/download/release/latest/) of Node.js, which increments its major version over time. This might result in unexpected behavior or incompatibilities with your development version. You can specify either a semantic version number (such as `18.18.0`) or an alias (such as `lts`). > Render uses the [`node-version-alias`](https://github.com/ehmicky/node-version-alias) module to resolve version aliases and [semver](https://semver.org) ranges. ## History of default Node.js versions If you don't set a Node.js version for your service, Render's default version depends on when you originally created the service: | Service Creation Date | Default Node.js Version | |---|---| | 2025-06-12 and later | `22.16.0` | | 2024-12-16 | `22.12.0` | | 2024-11-24 | `22.11.0` | | 2024-10-30 | `22.10.0` | | 2024-07-09 | `20.15.1` | | 2024-04-17 | `20.12.2` | | 2024-04-04 | `20.12.1` | | 2024-03-27 | `20.12.0` | | 2024-02-23 | `20.11.1` | | 2023-11-29 | `20.10.0` | | Before 2023-11-01 | `14.17.0` | # Enabling Okta SSO and SCIM > *These instructions are specific to Okta.* > > If you're using a different identity provider, see the primary article on [SAML SSO](saml-sso). ## Prerequisites Enabling Okta SSO and SCIM requires a Render [Enterprise org](enterprise-orgs). 
## Supported features Enabling [Okta SSO](#sso) for your Render Enterprise org provides the following features: - Okta-initiated SSO - Render-initiated SSO - Just-in-time provisioning of org members with first-time login Enabling [Okta SCIM](#scim) provides the following additional features: - Provisioning and deprovisioning org members from Okta ## Configuration steps > *Before completing these steps, make sure you've [verified ownership](saml-sso#1-verify-domain-ownership) of your organization's domains.* ### SSO 1. From your org's Settings page in the Render Dashboard, scroll down to *SAML SSO* and click *+ Configure connection*: [img] The following dialog appears: [img] 2. Under *Add Render to provider*, switch to the *Okta* tab and copy your *Connection ID*. 3. Open the [Okta Integration Network App Catalog](https://www.okta.com/integrations/), then find and select the *Render* integration. 4. Click *+ Add Integration* to start configuring the integration for your Okta org. 5. In the *General Settings* tab of the configuration flow, provide your *Connection ID* in the corresponding field. 6. Provide values for other configuration fields as desired and complete the setup. 7. After your Render integration is created, navigate to its *Sign On* tab and copy its SAML 2.0 *Metadata URL*: [img] 8. Back in the Render Dashboard, return to the SSO configuration dialog. Switch to the *Connect provider to Render* tab: [img] 9. Paste the Okta metadata URL you copied and click *Configure SAML SSO*. ### SCIM #### 1. Generate a SCIM token 1. From your organization home in the [Render Dashboard][dboard], open the org's *Settings* page. 2. Under *Security*, find the *SCIM Provisioning* section and click *+ Create*: [img] A *SCIM Configuration* dialog appears. 3. Copy the values of *Base URL* and *Token* in the dialog. You'll provide these to Okta. #### 2. Configure SCIM in Okta 1. 
From the *Applications* page in your Okta admin console, select the *Render* integration you created during [SSO setup](#sso). 2. Switch to the *Provisioning* tab and click *Configure API Integration*: [img] 3. Check the *Enable API integration* checkbox. An *API Token* field appears: [img] 4. Provide the *Token* value you copied during [SCIM token generation](#1-generate-a-scim-token). 5. Click *Test API Credentials* to verify that your API token is valid. 6. Click *Save*. 7. Still under the *Provisioning* tab, click *To App* in the left sidebar. 8. Under *Provisioning to App*, click *Edit* and check the checkbox for each Render provisioning action you want to enable in Okta (usually all of them): [img] 9. Click *Save*. You're all set! Okta syncs with Render to enable managing org members from your Okta admin console. ## Signing in with Okta 1. From the [Render login page](https://dashboard.render.com/login), click *Sign in with SSO*: [img] Note that the login page automatically redirects to the Render Dashboard if you're already signed in with another method. First sign out of Render, then try again. 2. Provide your Okta-managed email address and click *Sign in with SSO*: [img] 3. Render redirects you to your Okta sign-on flow. 4. After you successfully authenticate to Okta, you're redirected back to the Render Dashboard. ## Troubleshoot If you have any issues setting up Okta SSO or SCIM for your org, please [reach out to the Render support team](https://dashboard.render.com?contact-support) in the Render Dashboard. # Setting Your Poetry Version | Current default Poetry version | |--------| | *`2.1.3`* Services created before *2025-06-12* have a different default version. [See below.](#history-of-default-poetry-versions) | [Poetry](https://pypi.org/project/poetry/) is a packaging and dependency manager for Python. It's automatically included in Render's native Python runtime. 
To specify a Poetry version, set your service's `POETRY_VERSION` [environment variable](configure-environment-variables) to any version number that's compatible with your [Python version](python-version). ## History of default Poetry versions If you don't set a Poetry version for your service, Render's default version depends on when you originally created the service: | Service Creation Date | Default Poetry Version | |---|---| | 2025-06-12 and later | `2.1.3` | | 2023-11-30 | `1.7.1` | | 2021-04-27 | `1.1.x (various)` | | Before 2021-04-27 | `1.0.x (various)` | # Setting Your Python Version > *Issues deploying your Python app?* See [Troubleshooting Python Deploys](troubleshooting-python-deploys). *Set a different Python version in _any_ of the following ways* (in descending order of precedence): 1. Set your service's `PYTHON_VERSION` [environment variable](configure-environment-variables) to a _fully qualified_ Python version (e.g., `3.13.5`). You can specify any released version from `3.7.3` onward. [img] You _must_ specify a fully qualified version (e.g., `3.13.5`) if you use this method. 2. Add a file named `.python-version` to the root of your repo. This file contains a single line with the version to use: ```text 3.13.5 ``` You _can_ omit the patch version (e.g., `3.13`) if you use this method. If you omit it, Render uses the latest corresponding patch version. Render doesn't support unreleased Python versions natively, but you can use them via [Render's Docker support](docker). 
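If you pin a version with `.python-version`, a small startup check can catch environment mismatches early. This is purely an illustrative sketch (not a Render feature), and the helper name is hypothetical:

```python
import platform

def matches_pinned_version(pinned: str, running: str = "") -> bool:
    """Illustrative helper (not part of Render): check whether the running
    interpreter matches a pin like "3.13" or "3.13.5". A pin that omits
    the patch version matches any patch release in that series."""
    running = running or platform.python_version()
    return running == pinned or running.startswith(pinned + ".")

print(matches_pinned_version("3.13", running="3.13.5"))  # True
print(matches_pinned_version("3.13", running="3.12.4"))  # False
```

Called with no `running` argument, it checks the interpreter your service is actually using.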
## History of default Python versions If you don't set a Python version for your service, Render's default version depends on when you originally created the service: | Service Creation Date | Default Python Version | |---|---| | 2025-06-12 and later | `3.13.4` | | 2024-12-16 | `3.11.11` | | 2024-10-29 | `3.11.10` | | 2024-04-04 | `3.11.9` | | 2024-02-22 | `3.11.8` | | 2024-01-02 | `3.11.7` | | 2023-12-04 | `3.11.6` | | Before 2023-11-01 | `3.7.10` | # Rails caching with Redis This guide shows how to set up caching using [Redis](https://redis.io/) with an existing [Rails](https://rubyonrails.org/) app on Render. Caching is a technique that reuses the result of a previous computation or call to reduce latency. For example, if you're making requests against an external API and you can tolerate stale results, you can cache the result, reducing the time to retrieve the data from hundreds of milliseconds to just a couple. Rails gives you the ability to use [different cache stores](https://guides.rubyonrails.org/caching_with_rails.html#cache-stores). While `MemoryStore` and `FileStore` can be sufficient in many use cases, using an external component like Redis for your cache has a couple of advantages: - Redis allows you to share data between processes and instances. This is especially handy when you're using a threaded server like Puma or if you have multiple server instances. - Redis is more durable: on new deploys, new instances of your web service can access cached results from your old instances. The rest of the guide assumes you already have an existing Rails app. Follow our [Rails quickstart](deploy-rails-8) to create one if you don't. ## Deploy to Render We will first deploy a Redis instance on Render that is connected to your Rails app. 
There are two ways to deploy: [declare your services within your repository](infrastructure-as-code) in a `render.yaml` file, or manually set up your services using the dashboard. ### Use `render.yaml` to deploy In your existing `render.yaml`, add the following: ```yaml{2-6,19-23} services: - type: redis name: cache ipAllowList: [] # only allow internal connections plan: free # optional (defaults to starter) maxmemoryPolicy: allkeys-lfu # optional (defaults to allkeys-lru). Rails recommends allkeys-lfu as a default. - type: web # this example Rails service comes from https://render.com/docs/deploy-rails name: mysite runtime: ruby buildCommand: "./bin/render-build.sh" startCommand: "bundle exec puma -C config/puma.rb" envVars: - key: DATABASE_URL fromDatabase: name: mysite property: connectionString - key: RAILS_MASTER_KEY sync: false - key: REDIS_URL # this must match the name of the environment variable used in production.rb fromService: type: redis name: cache property: connectionString ``` This creates a new free Redis instance called `cache` with its `maxmemory-policy` set to `allkeys-lfu`. It also provides the connection string of the Redis instance to the Rails app as an environment variable called `REDIS_URL`. ### Deploy Manually If you don't want to deploy your Rails app through a Blueprint, you can follow these steps for a manual deploy. 1. Create a new [Redis instance](key-value#create-your-key-value-instance) on Render. Note your instance's **Internal Redis URL**; you will need it later. 2. Navigate to your Rails app service page. Select the `Environment` tab. 
Add the following environment variable: | Key | Value | | ----------- | ---------------------------------------------------------- | | `REDIS_URL` | The **Internal Redis URL** for the Redis instance you created above | ## Add `RedisCacheStore` to your Rails app Now that you've deployed a Redis instance that's connected to your Rails app, we'll configure the cache store used in production to be `RedisCacheStore` with the correct connection URL. 1. Add the following lines to your Gemfile. You may already have the `redis` gem installed. ```ruby gem 'redis' # Use hiredis to get better performance than the "redis" gem gem 'hiredis' ``` 2. Edit the following lines in `config/environments/production.rb`: ```ruby{2-4} # Use a different cache store in production. config.cache_store = :redis_cache_store, { url: ENV['REDIS_URL'] } ``` 3. Commit all changes and push them to your GitHub repository. That's it! Render will redeploy the Rails app, which now uses your Redis instance as its cache store. The official [Caching with Rails](https://guides.rubyonrails.org/caching_with_rails.html) guide is a great next read to help you make the most of your new Redis cache! # Static Site Redirects and Rewrites You can add *redirect and rewrite rules* to [static sites](static-sites) in the [Render Dashboard][dboard]: [img]
_These two rules are used by this very documentation site!_
When the path of an incoming request matches a rule's *Source*, Render automatically redirects or rewrites the request to the corresponding *Destination*. For details, see [Rule matching and ordering](#rule-matching-and-ordering). > *You can't apply redirect/rewrite rules to your domain root.* Each *Source* requires at least one URL path component (such as `/blog`, or even `/`). ## Which action to use? Set each rule's Action to *Redirect* or *Rewrite* according to your needs: | Action | Description | |--------|--------| | *Redirect* | Instructs the browser (or any other client) to *switch URLs* to the rule's destination via a `301 Moved Permanently` response code. Create a redirect rule if you're moving an existing resource from one path to another (for example, if you move your site's documentation content from `/documentation` to `/docs`). | | *Rewrite* | *Does not redirect the browser.* Instead, your site serves the content from the rule's destination at the original path. The browser can't detect that content was served from a different path or URL. Create a rewrite rule if: - You want to serve the same content from multiple paths. - Your static site uses a framework with [client-side routing](https://facebook.github.io/create-react-app/docs/deployment#serving-apps-with-client-side-routing) (such as [react-router](https://github.com/ReactTraining/react-router) or [Vue Router](https://router.vuejs.org/)), and you'll handle all requests from a single path like `/index.html`. | ## Rule matching and ordering *Render does not apply redirect or rewrite rules to a path if a resource exists at that path.* Instead, Render simply serves the resource at that path. This protects against overwriting valid paths with a rule, especially when using [wildcards](#wildcards). Here's what the full path-matching process looks like: [diagram] If this process results in a redirect to another site path, the process repeats with the new path. 
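The matching flow above can be sketched roughly as follows. This is a simplified model for illustration only (exact-match sources, no wildcards or placeholders), not Render's actual implementation:

```python
# Simplified model of static-site rule matching. Real rules also support
# wildcards and placeholders; this sketch handles exact-match sources only.
def resolve(path, resources, rules, max_hops=10):
    """Return (final_path, action) for an incoming request path."""
    for _ in range(max_hops):
        if path in resources:          # a resource exists at this path: serve it, skip rules
            return path, "serve"
        rule = rules.get(path)
        if rule is None:
            return path, "404"
        action, destination = rule
        if action == "rewrite":        # serve destination content at the original URL
            return destination, "rewrite"
        path = destination             # redirect: repeat matching with the new path
    return path, "too_many_redirects"

resources = {"/", "/docs/index.html"}
rules = {"/home": ("redirect", "/"), "/app": ("rewrite", "/docs/index.html")}
print(resolve("/home", resources, rules))  # ("/", "serve")
print(resolve("/app", resources, rules))   # ("/docs/index.html", "rewrite")
```

Note how the `/home` redirect re-enters the matching loop and ends up serving the root resource, while the `/app` rewrite returns immediately.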
## Rule syntax - *Source* must be a path (not a full URL). This is matched against the path of the incoming request. - *Destination* can be either a path or a full, publicly accessible URL. ### Basic examples | Source | Destination | | ------------------ | -------------------- | | `/home` | `/` | | `/blog/index.html` | `/blog` | | `/web-host` | `https://render.com` | ### Wildcards Use a *wildcard* (`*`) to match arbitrary strings in a path. - In *Source*, `*` matches _any_ string that appears starting at that position in the path. - Specify `/*` to match _all_ paths. - In *Destination*, `*` applies the _entire string_ captured by the wildcard in *Source*. | Source | Destination | Example Effect | | ------ | ------------- | ----------------------------------------- | | `/*` | `/blog/*` | `/path1/path2` → `/blog/path1/path2` | | `/*` | `/index.html` | All requests → `/index.html` | ### Placeholders Use *placeholders* to include specific path components from *Source* in *Destination*: | Source | Destination | Example Effect | | ----------------------- | ------------------------- | ---------------------------------------------- | | `/blog/posts/:postid` | `/blog/:postid` | `/blog/posts/my-post` → `/blog/my-post` | | `/updates/:month/:year` | `/changelog/:year/:month` | `/updates/03/2024` → `/changelog/2024/03` | # Setting Your Ruby Version *Set a different Ruby version in _any_ of the following ways* (in descending order of precedence): 1. Include a `Gemfile.lock` or a `gems.locked` file in the root of your repo that specifies the version to use under `RUBY VERSION`: ```ruby # Gemfile.lock RUBY VERSION ruby 3.3.0 ``` You can add this entry to an existing `Gemfile.lock` file or update its value by running: ```shell bundle update --ruby ``` 2. Add a file named `.ruby-version` to the root of your repo. This file contains a single line with the version to use: ```text 3.3.0 ``` 3. Add a file named `.tool-versions` to the root of your repo. 
This file can specify versions for multiple languages. To set the Ruby version, add a line like the following: ```text ruby 3.3.0 ``` 4. Set the `ruby` directive in your `Gemfile`. To avoid version mismatches across environments, you can set your Ruby version in the `.ruby-version` file, then read the value from that file in your `Gemfile`: ```ruby # Gemfile ruby file: ".ruby-version" ``` ## History of default Ruby versions If you don't set a Ruby version for your service, Render's default version depends on when you originally created the service: | Service Creation Date | Default Ruby Version | |---|---| | 2025-06-12 and later | `3.4.4` | | 2024-11-24 | `3.3.6` | | 2024-09-05 | `3.3.5` | | 2024-07-11 | `3.3.4` | | 2024-06-13 | `3.3.3` | | 2024-06-03 | `3.3.2` | | 2024-04-23 | `3.3.1` | | 2024-03-18 | `3.3.0` | | 2023-11-01 | `3.2.2` | | Before 2023-11-01 | `2.6.8` | # Specifying a Rust Toolchain By default, Render uses the latest stable Rust toolchain, but you can specify a different toolchain by adding a [file called `rust-toolchain`](https://rust-lang.github.io/rustup/overrides.html?#the-toolchain-file) at the root of your repo. It should contain a single line specifying the version. For example: ```text nightly-2020-03-15 ``` or ```text beta ``` You can also set the `RUSTUP_TOOLCHAIN` environment variable to a valid version. The environment variable overrides the version in toolchain files. > If you override the toolchain in your build command with `cargo +nightly ...`, the specified toolchain must already be installed. You can install new toolchains using `rustup` as part of your build command. Learn more about [Rust toolchains](https://rust-lang.github.io/rustup/concepts/toolchains.html). 
# HTTP Headers for Static Sites Since static sites don't have a server-side component that can inject custom [HTTP headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers) in responses, Render lets you define response headers for your static sites in your [dashboard][dboard]. ## Header Syntax The *header path* must be a relative path without the domain. It will be matched with all custom domains attached to your site. You can use *wildcards* to match arbitrary request paths. | Path | Effect | | ----------- | ---------------------------------------------------------------------------- | | `/*` | Matches all request paths. | | `/blog/*` | Matches `/blog/`, `/blog/latest-post/`, and all other paths under `/blog/` | | `/**/*` | Matches `/blog/`, `/assets/`, and all other paths with at least two slashes. | | `/*.css` | Matches `/tokens.css` and `/mode.css`, but not `/assets/theme.css` | | `/**/*.css` | Matches `/assets/theme.css` but not `/tokens.css` | The *name* is the *case-insensitive* name for the header. Examples include: - `Cache-Control` - `X-Frame-Options` - `Referrer-Policy` The *value* of the header is sent as-is in the response. Examples include: - `public, max-age=86400` - `DENY` - `same-origin` The header key is normalized and the value is appended to it to form the response: - `cache-control: public, max-age=86400` - `x-frame-options: DENY` - `referrer-policy: same-origin` # Deploy an AI Chatbot with LangChain and MongoDB > This tutorial is featured by [MongoDB](https://www.mongodb.com/), a Render partner. Deploy an AI chatbot that uses Retrieval-Augmented Generation (RAG) powered by data from PDFs you upload. You'll follow steps to: 1. Create a Render web service 2. Connect the Render web service to a MongoDB Atlas instance 3. Enable vector search by adding a vector index to your Atlas instance For general guidance on connecting your Render services to MongoDB Atlas, see [this article](connect-to-mongodb-atlas). 
# Setting Your uv Version | Current default uv version | |--------| | *`0.7.12`* | [uv](https://docs.astral.sh/uv/) is a packaging and dependency manager for Python. Render automatically adds uv to your Python service's runtime if your project's root directory includes a `uv.lock` file. To specify a uv version, set your service's `UV_VERSION` [environment variable](configure-environment-variables) to any version number that's compatible with your [Python version](python-version). ## History of default uv versions | Service Creation Date | Default uv Version | |---|---| | 2025-06-12 and later | `0.7.12` |