Documentation (English)

Data Integrations

Connect to external databases, APIs, and file storage systems. Synchronize data automatically or import as a one-time operation.

There are off-the-shelf connectors to external platforms, and there are client libraries that can be used to collect live data (e.g. from sensors).

Available Connectors

We support a wide range of data sources with automatic synchronization capabilities. Click on any connector below to see detailed setup instructions:

General Purpose

  • HTTP - Download data from any publicly accessible URL. Perfect for raw files, APIs, and web resources.
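The HTTP connector simply fetches whatever a URL serves and stores it as a file. As a rough illustration of that mechanism in plain Python (the connector itself is configured in the UI, not in code), a minimal download helper might look like this. The example uses a local file:// URL so it is self-contained; any publicly accessible http(s):// URL works the same way:

```python
import shutil
import tempfile
import urllib.request
from pathlib import Path

def download(url: str, dest: Path) -> Path:
    """Fetch the resource at `url` and write it to `dest`."""
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        shutil.copyfileobj(response, out)
    return dest

# Create a small local file and "download" it via its file:// URI.
src = Path(tempfile.gettempdir()) / "example.csv"
src.write_text("id,value\n1,42\n")
dest = download(src.as_uri(), Path(tempfile.gettempdir()) / "downloaded.csv")
print(dest.read_text())
```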

Databases & Storage

  • MySQL - Connect MySQL databases directly to AI pipelines with automatic synchronization.
  • PostgreSQL - Use Postgres data for AI pipelines with automatic synchronization.
  • Amazon S3 - Connect S3 buckets for seamless data integration and file synchronization.
  • Airtable - Use Airtable data for AI pipelines with automatic synchronization.

Code & Datasets

  • GitHub - Connect repositories with automatic synchronization. Sync code, documentation, and repository data.
  • Kaggle - Download datasets and competition data from Kaggle directly to your pipelines.
  • Hugging Face - Access thousands of datasets from Hugging Face Hub for AI and ML projects.

Quick Start

To set up any connector:

  1. Navigate to Data Integrations in your flow
  2. Select an Integration from the available connectors
  3. Configure Connection with your credentials and settings
  4. Create Connection to start syncing data
  5. Monitor Sync Status in Data Synchronization
  6. Access Your Data in File Manager

Each connector has specific requirements and setup steps. Click on a connector above to see detailed instructions.

Data Sources

You can load data from different sources easily:

  1. Upload manually in the file browser (use the file manager tab in your flow)
  2. Use an integration to load data from another tool (see the connector tab in your flow)
  3. Use sync jobs to continuously synchronize data from integrations (see the Data Synchronization tab in your flow)

More technical methods:

  1. Use Python code to write to a file on aicuflow
  2. Stream sensor data using the aicuflow C++ SDK
  3. Use the public API to send data via HTTP
  4. (experimental:) Use the custom code node to generate data
  5. Possibly sooner than you think: scraping
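For the public-API route above, sending data boils down to an authenticated HTTP POST. A minimal sketch with Python's standard library follows; note that the endpoint URL, token, and payload shape are placeholders, not the real aicuflow API (check the API reference for the actual routes):

```python
import json
import urllib.request

# Placeholder values -- substitute your real endpoint and API token.
API_URL = "https://example.invalid/api/v1/files/sensor-log.json"
API_TOKEN = "YOUR_API_TOKEN"

def build_upload_request(url: str, token: str, payload: dict) -> urllib.request.Request:
    """Assemble (but do not send) an authenticated JSON POST request."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_upload_request(API_URL, API_TOKEN, {"sensor": "temp-01", "value": 21.7})
# urllib.request.urlopen(req) would perform the actual upload.
print(req.method, req.full_url)
```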

Responsible Developers: Julia, Maxim, Finn, previously Shivam.

