To add a robots.txt file to a Next.js application, follow these simple steps:
1. Create a new file called robots.txt in the root directory of your Next.js application.
2. Open the robots.txt file in a text editor and define your desired rules for search engine crawlers. For example, to allow all crawlers full access to your website, you can use the following content:
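User-agent: *
Allow: /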
3. Save the robots.txt file.
Next.js serves static files from the public directory, not from the project root, so a robots.txt file placed in the root is not reachable at /robots.txt on its own. To have Next.js serve the root-level robots.txt file correctly, you need to make a minor configuration change.
4. Open the next.config.js file located in the root directory of your Next.js application. If the file doesn't exist, create it.
5. Inside the next.config.js file, add the following code to configure Next.js to serve the robots.txt file:
JavaScript:
module.exports = {
  async rewrites() {
    return [
      {
        source: '/robots.txt',
        destination: '/api/robots',
      },
    ];
  },
};
6. Save the next.config.js file.
The code above configures Next.js to rewrite requests for /robots.txt to the /api/robots endpoint.
7. Create a new file called robots.js inside the pages/api directory of your Next.js application.
8. Open the robots.js file in a text editor and add the following code:
JavaScript:
import fs from 'fs';
import path from 'path';

export default function handler(req, res) {
  // Read robots.txt from the project root and send it as plain text.
  res.setHeader('Content-Type', 'text/plain');
  res.status(200).send(fs.readFileSync(path.join(process.cwd(), 'robots.txt'), 'utf8'));
}
9. Save the robots.js file.
The code above creates an API route that reads the robots.txt file from the root directory of your Next.js application and serves its contents.
10. Start or restart your Next.js application.
After following these steps, you should be able to access the robots.txt file by visiting /robots.txt in your Next.js application, and search engine crawlers will be able to retrieve and follow the rules specified in the file.
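As a quick local check (a minimal sketch, assuming your development server is running on the default port 3000), you can fetch the file and print its contents:
JavaScript:
// Request robots.txt from the locally running app and log the response body.
fetch('http://localhost:3000/robots.txt')
  .then((response) => response.text())
  .then((text) => console.log(text));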