Use DIN in Your n8n Workflow

This guide demonstrates how to integrate NeurochainAI’s Inference API with n8n, a popular no-code platform for building automated workflows.

Integrating NeurochainAI with n8n lets you connect NeurochainAI’s AI capabilities to a wide range of applications and services. Using NeurochainAI’s inference models and data-processing capabilities, you can build workflows that automate tasks such as sentiment analysis, data collection, and more.

Note: Before starting, make sure to review n8n’s Terms of Service and license requirements. n8n has specific licensing terms, especially if you're planning to use it in a commercial environment. Visit n8n.io to learn more about their licensing and terms of use.


Prerequisites

  1. n8n Installation: Ensure that n8n is installed and running on your server or local machine. You can refer to the n8n installation guide if you haven’t set it up yet.

  2. NeurochainAI API Key: You’ll need an API key from NeurochainAI to access its inference models and other services. Generate it from the NeurochainAI dashboard.


Ready-to-use Nodes:

REST API - HTTP REQUEST NODE

Step 1: Copy and Import the Node into n8n

The following JSON code defines an HTTP Request node configured to interact with NeurochainAI’s REST API. Simply copy the code below and import it directly into n8n. The node is set up to send a POST request to the NeurochainAI API endpoint with the necessary headers and parameters.

{
  "meta": {
    "instanceId": "4c46e6df4f8238d069a9ce5e134970fa27badb1d20df5146d30fe3036aafb1c1"
  },
  "nodes": [
    {
      "parameters": {
        "method": "POST",
        "url": "https://ncmb.neurochain.io/tasks/message",
        "sendHeaders": true,
        "headerParameters": {
          "parameters": [
            {
              "name": "Authorization",
              "value": "=Bearer YOUR-API-KEY-HERE"
            },
            {
              "name": "Content-Type",
              "value": "application/json"
            }
          ]
        },
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "={\n  \"model\": \"Mistral-7B-OpenOrca-GPTQ\",\n  \"prompt\": \"YOUR-PROMPT-HERE\",\n  \"max_tokens\": 1024,\n  \"temperature\": 0.6,\n  \"top_p\": 0.95,\n  \"frequency_penalty\": 0,\n  \"presence_penalty\": 1.1\n}",
        "options": {}
      },
      "id": "703a0a39-c66b-4607-af50-e5584e7f829c",
      "name": "HTTP Request - NeurochainAI",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [
        600,
        560
      ],
      "alwaysOutputData": false,
      "onError": "continueErrorOutput"
    }
  ],
  "connections": {},
  "pinData": {}
}
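
Before importing, you may want to confirm that your API key and the endpoint respond as expected. The sketch below sends the same request outside n8n using Node.js 18+ (built-in fetch); it prints the raw response because this guide does not document the exact response schema.

// Minimal verification sketch (Node.js 18+). Mirrors the HTTP Request node above.
async function main() {
  const res = await fetch("https://ncmb.neurochain.io/tasks/message", {
    method: "POST",
    headers: {
      Authorization: "Bearer YOUR-API-KEY-HERE", // same key as in the node
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "Mistral-7B-OpenOrca-GPTQ",
      prompt: "YOUR-PROMPT-HERE",
      max_tokens: 1024,
      temperature: 0.6,
      top_p: 0.95,
      frequency_penalty: 0,
      presence_penalty: 1.1,
    }),
  });
  // Print status and raw body; inspect it to see where the generated text lives.
  console.log(res.status, await res.text());
}

main().catch(console.error);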

Step 2: Configure the Node

  1. Import the Node: In the n8n editor, open a new or existing workflow and paste the JSON code above directly onto the canvas (or use the workflow menu’s import option) to add this node to your workflow.

  2. Replace Placeholders:

    • YOUR-API-KEY-HERE: Replace this with your actual NeurochainAI API key.

    • YOUR-PROMPT-HERE: Replace this with the prompt or text input you want to send to the NeurochainAI model.

  3. Adjust Parameters (Optional):

    • model: Choose the model you want to use. For example, Mistral-7B-OpenOrca-GPTQ.

    • max_tokens: Set the maximum number of tokens for the response.

    • temperature: Adjust this to control response creativity (higher for more diverse outputs, lower for more focused).

    • top_p, frequency_penalty, and presence_penalty: Adjust these parameters to fine-tune the response.


How It Works

  • Authorization: The node includes an Authorization header with the bearer token (your API key) to authenticate with NeurochainAI’s API.

  • Content-Type: Set to application/json for JSON data transfer.

  • Request Body: The request body contains the model, prompt, and other parameters needed to interact with NeurochainAI’s AI inference model.
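
If you want the prompt to come from earlier nodes rather than being hard-coded, a Code node can assemble it per item before the HTTP Request node. This is only a sketch: the incoming field name text is an assumption, and the HTTP Request node’s JSON body would then reference the prepared value with an expression such as {{ $json.prompt }}.

// n8n Code node ("Run Once for All Items") placed before the HTTP Request node.
// Builds one prompt per incoming item; "text" is an assumed input field name.
const out = [];
for (const item of $input.all()) {
  out.push({
    json: {
      prompt: `Analyze the sentiment of the following text:\n${item.json.text ?? ""}`,
    },
  });
}
return out;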


Step 3: Test the Node

  1. Execute the Node: Run the HTTP Request node to send the prompt to NeurochainAI’s API and receive the inference result.

  2. View the Output: The node’s output should display the response from the AI model, containing the generated text or inference result based on the provided prompt.
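
Because this guide doesn’t pin down the response schema, a quick way to find the generated text is to drop a Code node after the HTTP Request node and surface the response’s top-level fields, as in this sketch:

// n8n Code node placed directly after "HTTP Request - NeurochainAI".
// Lists the top-level fields of the API response so you can see where
// the generated text lives before wiring it into later nodes.
const response = $input.first().json;
return [{ json: { fields: Object.keys(response), raw: response } }];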


Example Use Case

This setup can be incorporated into larger workflows, such as:

  • Automated Email Response Generation: Connect the output of this node to an email node to send AI-generated responses automatically.

  • Social Media Monitoring: Use the prompt to analyze text from social media feeds and store insights in a database.

  • Data Processing Pipelines: Integrate with other nodes like Google Sheets or database nodes for automated data processing and storage.

FLUX IMAGE API - HTTP REQUEST NODE

Step 1: Copy and Import the Node into n8n

The following JSON code defines an HTTP Request node for interacting with NeurochainAI’s Flux model. Copy this code and import it into your n8n workflow to set up the integration.

{
  "meta": {
    "instanceId": "4c46e6df4f8238d069a9ce5e134970fa27badb1d20df5146d30fe3036aafb1c1"
  },
  "nodes": [
    {
      "parameters": {
        "method": "POST",
        "url": "https://ncmb.neurochain.io/tasks/tti",
        "sendHeaders": true,
        "headerParameters": {
          "parameters": [
            {
              "name": "Authorization",
              "value": "=Bearer YOUR-API-KEY-HERE"
            },
            {
              "name": "Content-Type",
              "value": "application/json"
            }
          ]
        },
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "={\n  \"model\": \"flux1-schnell-gguf\",\n  \"prompt\": \"YOUR-PROMPT-HERE\",\n  \"size\": \"1024x1024\",\n  \"quality\": \"standard\",\n  \"n\": 1,\n  \"seed\": 1\n}",
        "options": {}
      },
      "id": "c958ab33-5d80-4066-b920-c36d325edf23",
      "name": "HTTP Request - Flux Model",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [
        1360,
        1000
      ],
      "alwaysOutputData": false,
      "onError": "continueErrorOutput"
    }
  ],
  "connections": {},
  "pinData": {}
}
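
As with the REST API node, you can verify the Flux endpoint and your API key outside n8n first. This Node.js 18+ sketch mirrors the node above and prints the raw response, since the exact response format isn’t documented here.

// Minimal verification sketch (Node.js 18+) for the Flux image endpoint.
async function main() {
  const res = await fetch("https://ncmb.neurochain.io/tasks/tti", {
    method: "POST",
    headers: {
      Authorization: "Bearer YOUR-API-KEY-HERE",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "flux1-schnell-gguf",
      prompt: "YOUR-PROMPT-HERE",
      size: "1024x1024",
      quality: "standard",
      n: 1,
      seed: 1,
    }),
  });
  console.log(res.status, await res.text());
}

main().catch(console.error);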

Step 2: Configure the Node

  1. Import the Node: In the n8n editor, open a new or existing workflow and paste the JSON code above directly onto the canvas (or use the workflow menu’s import option) to add the HTTP Request node to your workflow.

  2. Replace Placeholders:

    • YOUR-API-KEY-HERE: Replace this placeholder with your actual NeurochainAI API key.

    • YOUR-PROMPT-HERE: Replace this placeholder with the text prompt you want to send to the Flux model.

  3. Adjust Parameters (Optional):

    • size: Specify the desired output image size (e.g., "1024x1024").

    • quality: Set the quality of the output (e.g., "standard").

    • n: Specify the number of images to generate.

    • seed: Set the random seed for reproducibility in generated images.


How It Works

  • Authorization: The node includes an Authorization header with a bearer token (your API key) to authenticate with the NeurochainAI API.

  • Content-Type: Set to application/json to ensure proper JSON data formatting.

  • Request Body: The JSON body contains the model name, prompt, and other parameters required to interact with the Flux model for image generation.
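
If you want each execution to produce a different image for the same prompt, you can prepare the body values in a Code node ahead of the HTTP Request node and reference them with expressions (for example {{ $json.seed }}). This is an optional sketch; the field names follow the JSON body shown above.

// n8n Code node placed before "HTTP Request - Flux Model" (optional).
// Generates a fresh seed per run so repeated executions vary the output;
// keep the seed fixed instead if you need reproducible images.
return [
  {
    json: {
      prompt: "YOUR-PROMPT-HERE",
      size: "1024x1024",
      quality: "standard",
      n: 1,
      seed: Math.floor(Math.random() * 1000000),
    },
  },
];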


Step 3: Test the Node

  1. Execute the Node: Run the HTTP Request node to send your prompt to the Flux model and generate an image based on the parameters provided.

  2. View the Output: The output of the node should display the generated image or the URL to access it, depending on the API’s response format.
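
Since the response may expose either a URL or embedded image data, a small Code node after the HTTP Request node can normalize it for downstream nodes. The field names checked below (url, image_url, data[0].url) are assumptions; adjust them to whatever your actual response contains.

// n8n Code node placed after "HTTP Request - Flux Model".
// Picks out a likely image reference from the response; the checked field
// names are assumptions and should be adjusted to the real response shape.
const res = $input.first().json;
const image =
  res.url ??
  res.image_url ??
  (Array.isArray(res.data) ? res.data[0]?.url : undefined);
return [{ json: { image: image ?? null, raw: res } }];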


Example Use Case

This integration can be used in various workflows, such as:

  • Automated Image Generation: Automatically generate images based on predefined prompts or dynamic data inputs.

  • Creative Content Generation: Use prompts based on trending topics or specific themes to create visual content for social media.

  • Custom Design Pipelines: Integrate with other nodes to store generated images in cloud storage or send them via email or messaging services.

Ready-to-use Workflows:

Telegram Flux Image Generator Bot

This workflow integrates NeurochainAI’s Flux Image API with a Telegram bot, enabling users to generate images directly within Telegram. When a user sends the /flux command followed by a prompt, the bot processes the request and returns an AI-generated image in response. Perfect for quick and interactive image creation!

Steps to Set Up

  1. Add Telegram Credentials: Insert your Telegram Bot API Key in the Telegram Trigger node.

    • To get this key, contact @BotFather on Telegram, issue the /newbot command, and follow the instructions.

  2. Add NeurochainAI API Key: In the HTTP Request node, replace "YOUR-API-KEY-HERE" with your NeurochainAI API Key.

  3. Test the Workflow: Send a message in Telegram with /flux followed by your prompt, and the bot will generate an image.

That's it! Your bot is now ready to create images with NeurochainAI’s Flux model.
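
If you customize the workflow, the piece that usually needs attention is extracting the prompt from the /flux command. A minimal Code node sketch is shown below; it assumes the Telegram Trigger outputs the standard Telegram update object with message.text.

// n8n Code node between the Telegram Trigger and the HTTP Request node.
// Strips the "/flux" command (including an optional @BotName suffix) so only
// the prompt text is forwarded to the Flux API.
const text = $input.first().json.message?.text ?? "";
const prompt = text.replace(/^\/flux(@\w+)?\s*/i, "").trim();
return [{ json: { prompt } }];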

AI-Powered Telegram Messaging Bot

This workflow connects a Telegram bot with NeurochainAI’s inference API, enabling real-time, AI-driven responses directly in chat. Leveraging NeurochainAI's advanced models, the bot generates intelligent, context-aware replies to user messages. Simply message the bot to receive AI-powered insights, making it perfect for interactive customer support, virtual assistance, or just exploring AI responses in conversation.

Steps to Set Up

  1. Add Telegram API Key: Insert your Telegram Bot API Key in the Telegram Trigger node.

    • To obtain the API key, contact @BotFather on Telegram, use /newbot, and follow the steps.

  2. Add NeurochainAI API Key: In the HTTP Request node, replace "YOUR-API-KEY-HERE" with your NeurochainAI REST API Key.

  3. Select Chat Mode:

    • Direct Messages: Leave the Switch node set to DM to use the bot in private messages.

    • Group Mode: Set the Switch node to Group to allow the bot to respond in group chats.

      • In Group Mode, add the bot’s @username to the Switch node to ensure it responds only when mentioned.


Your AI Messaging Bot is now ready to respond in real time on Telegram using NeurochainAI’s inference models!
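
For reference, the Group Mode check boils down to replying only when the bot is mentioned. The sketch below shows that logic as a Code node; the @username value is a placeholder, and the message shape assumes the standard Telegram update object.

// Sketch of the Group Mode logic behind the Switch node.
// Replace "@YourBotUsername" with your bot's actual @username.
const BOT_USERNAME = "@YourBotUsername";
const msg = $input.first().json.message ?? {};
const text = msg.text ?? "";
const isGroup = ["group", "supergroup"].includes(msg.chat?.type ?? "");
return [{ json: { shouldReply: !isGroup || text.includes(BOT_USERNAME), text } }];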

NeurochainAI is not affiliated with n8n. This guide is provided solely to assist users in leveraging NeurochainAI’s Inference API within no-code platforms like n8n. Users are responsible for adhering to n8n’s terms of use and licensing requirements when implementing these integrations. Please review and comply with all applicable n8n policies to ensure proper and lawful usage.
