
Integration

PAT Token

  • Log in with an admin user,
  • Go to [Setting],
  • Generate a PAT with a name,
  • Copy the PAT token.

Collection

API to import raw data and trigger pipelines

import json

import requests


def import_raw_data(token):
    headers = {'authorization': 'pat ' + token}
    event_data = {
        'code': 'event_code',  # raw topic name
        'data': {  # raw topic data
            # ...data
        },
        'triggerType': 'insert'  # always 'insert' for raw data
    }
    response = requests.post(
        'http://localhost:8000/pipeline/data',
        data=json.dumps(event_data), headers=headers)
    return response.json()
tip

Or use /pipeline/data/async, which returns as soon as the data is saved and triggers pipeline processing asynchronously.
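
The async variant is a drop-in replacement for the endpoint URL. A minimal sketch, reusing event_data and headers from the function above:

response = requests.post(
    'http://localhost:8000/pipeline/data/async',
    data=json.dumps(event_data), headers=headers)
# Returns as soon as the raw data is saved; pipeline processing
# continues asynchronously on the server.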

Via Other Middleware

Kafka

Send raw data to the topic configured by KAFKA_TOPICS.

{
  "code": "a_topic",
  "data": {
    // ...
  },
  "triggerType": "insert-or-merge",
  "tenantId": "1",
  "traceId": "1",
  "pat": "..."
}

Visit here for a description of the properties. For Kafka, one additional property, pat, is required.
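
As a minimal producer sketch using the kafka-python package; the broker address ('localhost:9092') and topic name ('a_topic') are assumptions for illustration and must match your deployment's configuration:

import json

from kafka import KafkaProducer

# Serialize message dicts as JSON; broker address is an assumption.
producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    value_serializer=lambda v: json.dumps(v).encode('utf-8'))

message = {
    'code': 'a_topic',
    'data': {},  # raw topic data
    'triggerType': 'insert-or-merge',
    'tenantId': '1',
    'traceId': '1',
    'pat': '...'  # PAT generated earlier
}
# 'a_topic' is a hypothetical topic name for illustration.
producer.send('a_topic', value=message)
producer.flush()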

RabbitMQ

Send raw data to the queue through its exchange.

{
  "code": "a_topic",
  "data": {
    // ...
  },
  "triggerType": "insert-or-merge",
  "tenantId": "1",
  "traceId": "1",
  "pat": "..."
}

Visit here for a description of the properties. For RabbitMQ, one additional property, pat, is required.
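
A minimal publisher sketch using the pika package; the exchange and routing key names below are hypothetical placeholders and must match the ones configured for your deployment:

import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

message = {
    'code': 'a_topic',
    'data': {},  # raw topic data
    'triggerType': 'insert-or-merge',
    'tenantId': '1',
    'traceId': '1',
    'pat': '...'  # PAT generated earlier
}
# Exchange and routing key are assumptions for illustration.
channel.basic_publish(
    exchange='an_exchange',
    routing_key='a_queue',
    body=json.dumps(message))
connection.close()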

Third Party BI

info

Connect the BI tool to Trino (if it is enabled) or directly to the storage.

Then use the Starburst ODBC driver.
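
For example, a minimal sketch of querying through the driver from Python with pyodbc; the DSN name 'Starburst' is an assumption and must match a DSN configured in your ODBC manager:

import pyodbc

# 'Starburst' is a hypothetical DSN name pointing at the Trino endpoint.
conn = pyodbc.connect('DSN=Starburst', autocommit=True)
cursor = conn.cursor()
cursor.execute('SELECT 1')  # simple connectivity probe
print(cursor.fetchone())
conn.close()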