MySQL Connector
Connect your MySQL database directly to AI pipelines and data flows with automatic synchronization.
Setup Instructions
1. Navigate to Data Integrations
Go to the Data Integrations tab in your flow.
2. Select MySQL Integration
Click Select an Integration, type MySQL in the search, and click Connect.
3. Configure Connection Settings
Fill in the following database connection details:
- Connector Name: Give your connector a descriptive name
- Host: The hostname or IP address of your MySQL server
  - For local setup: localhost or 127.0.0.1
  - For remote databases: The full hostname (e.g., db.example.com or your-instance.region.rds.amazonaws.com)
- Port: The MySQL port (default is 3306)
- Database Name: The name of the database you want to connect to
- Username: Your MySQL username
- Password: Your MySQL password
- Folder (Optional): Select a destination folder in the file manager
- If not specified, data will be stored in the root directory
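The settings above can be sketched as a plain config object with basic validation. This is an illustrative sketch, not the connector's actual API; all field names and values below are placeholders.

```python
# Illustrative sketch of the step-3 connection settings as a dict.
# Field names are assumptions for this example, not the connector's API.

def validate_settings(settings: dict) -> list[str]:
    """Return a list of problems found in the connection settings."""
    problems = []
    for key in ("name", "host", "database", "username", "password"):
        if not settings.get(key):
            problems.append(f"missing required field: {key}")
    port = settings.get("port", 3306)
    if not (isinstance(port, int) and 0 < port < 65536):
        problems.append(f"invalid port: {port!r}")
    return problems

settings = {
    "name": "orders-db",      # Connector Name: a descriptive label
    "host": "127.0.0.1",      # localhost for local setups, full hostname for remote
    "port": 3306,             # MySQL default
    "database": "orders",
    "username": "reporting_ro",
    "password": "change-me",  # placeholder
    "folder": None,           # optional; None means the root directory
}

assert validate_settings(settings) == []
```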
4. Configure SSL Settings (Optional)
For production databases:
- Enable SSL to ensure secure data transmission
- Ensure your MySQL server is configured to accept SSL connections
For local development:
- SSL can be disabled when connecting to localhost
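The SSL guidance above boils down to a simple rule: require SSL for any host that is not clearly local. A minimal sketch, with the set of local hostnames as an assumption:

```python
# Sketch of the SSL rule above: enable SSL unless the host is local.
# The set of "local" hostnames here is an illustrative assumption.

LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def ssl_required(host: str) -> bool:
    """True unless the host is a local address, where SSL may be skipped."""
    return host.strip().lower() not in LOCAL_HOSTS

assert ssl_required("db.example.com") is True
assert ssl_required("localhost") is False
```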
5. Create the Connection
After filling in all connection details, click Create Connection.
The system will:
- Test the database connection
- Verify credentials and permissions
- Begin the initial data synchronization
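Before clicking Create Connection, you can run a quick reachability check of your own to rule out network issues. The sketch below only tests that the host and port accept TCP connections; it does not verify credentials or permissions, which the system checks for you.

```python
import socket

# Quick pre-check: is the MySQL host/port reachable over TCP?
# This tests network reachability only, not credentials.

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `can_reach("db.example.com", 3306)` should return True before you attempt to create the connection.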
6. Monitor Sync Status
- Navigate to Data Synchronization to see the import progress
- The connector will download all tables from your MySQL database
- Each table will be imported as a separate file
7. Access Your Data
- Once the sync is complete, go to File Manager
- Navigate to the folder you specified (or root directory)
- You'll see a folder named after your database (e.g., mysql_data)
- Inside, all tables from your database will be available as individual files
- Click on any file to preview the table data
- The data is now ready to use in your AI pipelines and flows
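Once a table file is in the File Manager, downstream code can consume it directly. The sketch below assumes tables are exported as CSV with a header row; the actual export format and file names may differ, so check a file preview first.

```python
import csv
import io

# Sketch of consuming an imported table downstream, assuming CSV output.
# The sample below stands in for a file such as mysql_data/customers
# (a hypothetical name) opened from the File Manager.

sample = io.StringIO(
    "id,name,email\n"
    "1,Ada,ada@example.com\n"
    "2,Lin,lin@example.com\n"
)
rows = list(csv.DictReader(sample))

assert rows[0]["name"] == "Ada"
assert len(rows) == 2
```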
What Gets Imported:
- All tables in the specified database
- Complete table data with all rows and columns
- Table structure is preserved in a standardized format
Best Practices:
- Use SSL for production databases to ensure data security
- Use read-only database users when possible to prevent accidental modifications
- For large databases, consider creating a dedicated read replica to avoid impacting production performance
- Regularly monitor sync jobs to ensure data stays up-to-date
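The read-only-user practice above uses standard MySQL statements, generated here for clarity. The user, host pattern, database name, and password are placeholders; run the resulting statements as a MySQL admin.

```python
# The read-only-user best practice as standard MySQL statements.
# All names and the password below are placeholders.

def readonly_user_sql(user: str, host: str, database: str, password: str) -> list[str]:
    return [
        f"CREATE USER '{user}'@'{host}' IDENTIFIED BY '{password}';",
        f"GRANT SELECT ON `{database}`.* TO '{user}'@'{host}';",
        "FLUSH PRIVILEGES;",
    ]

for stmt in readonly_user_sql("connector_ro", "%", "orders", "change-me"):
    print(stmt)
```

Granting only SELECT ensures the connector can read every table for synchronization but can never modify production data.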