Merge pull request #182 from quixio/dev
Docs Release 2023-07-002 (10 Jul 2023)
tbedford authored Jul 10, 2023
2 parents ef2f237 + c3368aa commit b176521
Showing 14 changed files with 168 additions and 29 deletions.
35 changes: 35 additions & 0 deletions docs/platform/integrations/kafka/confluent-cloud.md
@@ -0,0 +1,35 @@
# Connect to Confluent Cloud

Quix requires Kafka to provide streaming infrastructure for your Quix workspace.

When you create a new Quix workspace, there are three hosting options:

1. **Quix Broker** - Quix hosts Kafka for you. This is the simplest option as Quix provides hosting and configuration.
2. **Self-Hosted Kafka** - Use this option when you already have your own Kafka infrastructure and want Quix to provide the stream processing platform on top of it.
3. **Confluent Cloud** - If you use Confluent Cloud for your Kafka infrastructure, you can configure Quix to connect to your existing Confluent Cloud account.

This documentation covers the third hosting option, Confluent Cloud.

## Sign up for a Confluent Cloud account

If you do not already have a Confluent Cloud account, you can [sign up for a free trial](https://www.confluent.io/confluent-cloud/tryfree/).

## Selecting Confluent Cloud to host Quix

When you create a new Quix workspace, you can select your hosting option in the `Broker settings` dialog, as shown in the following screenshot:

![Broker Settings](../../images/integrations/confluent/confluent-broker-settings.png)

Select the option `Connect to your Confluent Cloud`.

## Confluent Cloud setup guide

When you choose the `Connect to your Confluent Cloud` broker setting, the `Confluent Cloud Setup Guide` is displayed, as shown in the following screenshot:

![Confluent Cloud Setup Guide](../../images/integrations/confluent/confluent-cloud-setup.png)

All the required configuration information can be found in your Confluent Cloud account.

!!! note

    If you already have topics created in your Confluent Cloud account, you can synchronize them with your Quix workspace. The `Synchronize Topics` checkbox is enabled by default.
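For orientation only — this is not part of the Quix setup dialog — the connection details you gather from your Confluent Cloud account typically look like the following. The bootstrap server address below is a made-up placeholder; Confluent Cloud uses an API key and secret in place of a username and password:

```python
# Typical Confluent Cloud connection settings (illustrative values only --
# copy the real ones from your cluster in the Confluent Cloud console).
confluent_config = {
    # Bootstrap server host:port (hypothetical placeholder value)
    "bootstrap.servers": "pkc-xxxxx.europe-west2.gcp.confluent.cloud:9092",
    # Confluent Cloud brokers require TLS plus SASL/PLAIN authentication
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    # An API key/secret pair created in Confluent Cloud acts as the credentials
    "sasl.username": "<YOUR_API_KEY>",
    "sasl.password": "<YOUR_API_SECRET>",
}

for key, value in confluent_config.items():
    print(f"{key} = {value}")
```

These are the same values the `Confluent Cloud Setup Guide` dialog asks you to paste in.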
6 changes: 3 additions & 3 deletions docs/platform/tutorials/image-processing/connect-video-tfl.md
@@ -1,16 +1,16 @@
# 3. Connect the TfL video feeds
# 4. Connect the TfL video feeds

In this part of the tutorial you connect your pipeline to the TfL traffic cam video feeds.

Follow these steps to deploy the **traffic camera feed service**:

1. Navigate to the `Code Samples` and locate `TfL Camera Feed`.

2. Click `Setup & deploy`.
2. Click `Deploy`.

3. Paste your TfL API Key into the appropriate input.

4. Click `Deploy`.
4. Click `Deploy` again.

Deploying will start the service in the Quix pre-provisioned infrastructure. This service will stream data from the TfL cameras to the `tfl-cameras` topic.

@@ -6,9 +6,9 @@
Follow these steps to deploy the **webcam service**:

1. Navigate to the Samples and locate `Image processing - Webcam input`.

2. Click `Setup & deploy`.
2. Click `Deploy`.

3. Click `Deploy`.
3. Once again, click `Deploy`.

This service will stream data from your webcam to the `image-base64` topic.
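As an aside (a sketch, not code from the sample), the idea behind the `image-base64` topic can be shown with Python's standard library: the raw frame bytes are base64-encoded into a string for transport, and a downstream service decodes them back. The frame bytes here are dummy data rather than a real image:

```python
import base64

# Dummy JPEG-like bytes standing in for a captured webcam frame
frame_bytes = b"\xff\xd8\xff\xe0" + b"fake-frame-data"

# Producer side: encode the frame as a UTF-8 base64 string for the topic
encoded = base64.b64encode(frame_bytes).decode("utf-8")

# Consumer side (the later "decode images" stage): recover the original bytes
decoded = base64.b64decode(encoded)

print(decoded == frame_bytes)  # True
```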

Binary file modified docs/platform/tutorials/image-processing/images/web-ui.png
29 changes: 22 additions & 7 deletions docs/platform/tutorials/image-processing/index.md
@@ -8,6 +8,13 @@
The following screenshot shows the pipeline you build in this tutorial:

![pipeline overview](./images/pipeline-overview.png)


This is the tutorial running live on Quix:
<div id="wrap">
<iframe id="frame" src="https://tfl-image-processing-ui-quix-realtimeimageprocessingtutorial.deployments.quix.ai/" title="Live real-time stream processing demo"></iframe>
</div>
You can interact with it here on this page, or open it [in a new tab](https://tfl-image-processing-ui-quix-realtimeimageprocessingtutorial.deployments.quix.ai/){target="_blank"} to view it more clearly.

## Getting help

If you need any assistance while following the tutorial, we're here to help in [The Stream community](https://join.slack.com/t/stream-processing/shared_invite/zt-13t2qa6ea-9jdiDBXbnE7aHMBOgMt~8g), our public Slack channel.
@@ -51,7 +58,7 @@
When you are logged into the Quix Portal, click on the `Code Samples` icon in th

## The pipeline you will create

There are four stages to the processing pipeline you build in this tutorial:
There are five stages to the processing pipeline you build in this tutorial:

1. Video feeds

@@ -66,7 +73,11 @@
There are four stages to the processing pipeline you build in this tutorial:

- Detect objects within images

4. Web UI configuration
4. Stream merge

- Merge the separate data streams into one

5. Web UI configuration

- A simple UI showing:

@@ -81,14 +92,18 @@
This tutorial is divided up into several parts, to make it a more manageable lea

1. **Connect the webcam video feed**. You learn how to quickly connect a video feed from your webcam, using a prebuilt sample.

2. **Object detection**. You use a computer vision sample to detect a chosen type of object. You'll preview these events in the live preview. The object type to detect can be selected through a web UI, which is described later.
2. **Decode images**. You decode the base64 encoded images coming from the webcam.

3. **Object detection**. You use a computer vision sample to detect a chosen type of object. You'll preview these events in the live preview. The object type to detect can be selected through a web UI, which is described later.

4. **Connect the TfL video feed**. You learn how to quickly connect the TfL traffic cam feeds, using a prebuilt sample. You can perform object detection across these feeds, as they are all sent into the object detection service in this tutorial.

3. **Connect the TfL video feed**. You learn how to quickly connect the TfL traffic cam feeds, using a prebuilt sample. You can perform object detection across these feeds, as they are all sent into the object detection service in this tutorial.
5. **Frame grabber**. You use a standard sample to grab frames from the TfL video feed.

4. **Frame grabber**. You use a standard sample to grab frames from the TfL video feed.
6. **Stream merge**. You use a standard sample to merge the different streams into one.

5. **Deploy the web UI**. You then deploy a prebuilt web UI. This UI enables you to select an object type to detect across all of your input video feeds. It displays the locations and counts of detected objects on a map.
7. **Deploy the web UI**. You then deploy a prebuilt web UI. This UI enables you to select an object type to detect across all of your input video feeds. It displays the locations and counts of detected objects on a map.

6. **Summary**. In this [concluding](summary.md) part you are presented with a summary of the work you have completed, and also some next steps for more advanced learning about the Quix Platform.
8. **Summary**. In this [concluding](summary.md) part you are presented with a summary of the work you have completed, and also some next steps for more advanced learning about the Quix Platform.

[Part 1 - Connect the webcam feed :material-arrow-right-circle:{ align=right }](connect-video-webcam.md)
6 changes: 3 additions & 3 deletions docs/platform/tutorials/image-processing/object-detection.md
@@ -1,4 +1,4 @@
# 2. Object detection
# 3. Object detection

In this part of the tutorial you add an object detection service into the pipeline. This service detects objects in any video feeds connected to its input. This service uses a [YOLO v3](https://viso.ai/deep-learning/yolov3-overview/) machine learning model for object detection.

@@ -8,9 +8,9 @@
Follow these steps to deploy the **object detection service**:

1. Navigate to the `Code Samples` and locate `Computer Vision object detection`.

2. Click `Setup & deploy`.
2. Click `Deploy`.

3. Click `Deploy`.
3. Click `Deploy` again.

This service receives data from the `image-raw` topic and streams data to the `image-processed` topic.

48 changes: 48 additions & 0 deletions docs/platform/tutorials/image-processing/stream-merge.md
@@ -0,0 +1,48 @@
# 6. Stream merge

In this part of the tutorial you add a stream merge service into the pipeline. This service merges the inbound streams into one outbound stream. Merging is needed because each traffic camera publishes its images to a separate stream, which allows the image processing services to be scaled up if needed. Once image processing is complete, the data from all streams is merged into a single stream so that the UI can consume it easily.

Follow these steps to deploy the **Stream merge service**:

1. Navigate to the `Code Samples` and locate `Stream merge`.

2. Click `Deploy`.

3. Click `Deploy` again.

This service receives data from the `image-processed` topic and streams data to the `image-processed-merged` topic.

??? example "Understand the code"

Here's the code in the file `quix_function.py`:

```python
# Imports used by the full sample (the original excerpt assumes them)
import base64

import pandas as pd
import quixstreams as qx

# Note: these handlers are methods of the sample's handler class.
# Callback triggered for each new event. (1)
def on_event_data_handler(self, stream_consumer: qx.StreamConsumer, data: qx.EventData):
print(data.value)

# All of the data received by this event data handler is published to the same predefined topic (2)
self.producer_topic.get_or_create_stream("image-feed").events.publish(data)

# Callback triggered for each new parameter data. (3)
def on_dataframe_handler(self, stream_consumer: qx.StreamConsumer, df: pd.DataFrame):

# Add a tag for the parent stream (4)
df["TAG__parent_streamId"] = self.consumer_stream.stream_id

# add the base64 encoded image to the dataframe (5)
df['image'] = df["image"].apply(lambda x: str(base64.b64encode(x).decode('utf-8')))

# All of the data received by this dataframe handler is published to the same predefined topic (6)
self.producer_topic.get_or_create_stream("image-feed") \
.timeseries.buffer.publish(df)
```

1. `on_event_data_handler` handles each new event on the topic that is subscribed to.
2. All events are published to the output topic in a single stream called `image-feed`.
3. `on_dataframe_handler` handles each new dataframe or timeseries data on the topic that is subscribed to.
4. Add a tag to preserve the parent stream id.
5. Add an `image` column to the dataframe and set the value to the base64 encoded image.
6. All data is published to the output topic in a single stream called `image-feed`.
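To illustrate why the `TAG__parent_streamId` tag matters, here is a sketch (not part of the tutorial code) of how a consumer of the merged stream could split rows back out per source camera with pandas — the camera ids are invented:

```python
import pandas as pd

# A toy merged dataframe: rows from two cameras share one stream,
# distinguished only by the tag the handler added (camera ids are invented)
merged = pd.DataFrame({
    "TAG__parent_streamId": ["cam-001", "cam-002", "cam-001"],
    "image": ["aGk=", "eW8=", "aGV5"],  # base64 strings, as in the handler
})

# Group rows by originating camera to demultiplex the merged stream
per_camera = {cam: grp for cam, grp in merged.groupby("TAG__parent_streamId")}

print(sorted(per_camera))          # ['cam-001', 'cam-002']
print(len(per_camera["cam-001"]))  # 2
```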

[Part 7 - Web UI :material-arrow-right-circle:{ align=right }](web-ui.md)
3 changes: 2 additions & 1 deletion docs/platform/tutorials/image-processing/summary.md
@@ -1,4 +1,4 @@
# 6. Summary
# 8. Summary

In this tutorial you have learned that it is possible to quickly build a real-time image processing pipeline, using prebuilt Code Samples. You have seen how you can connect to multiple types of video feed, perform object detection, and display the locations of the detected objects on a map, using the prebuilt UI.

@@ -10,6 +10,7 @@
Here is a list of the Quix open source Code Samples used in this tutorial, with
* [TfL traffic cam frame grabber](https://github.com/quixio/quix-samples/tree/main/python/transformations/TFL-Camera-Frame-Extraction)
* [Webcam interface](https://github.com/quixio/quix-samples/tree/main/applications/image-processing/webcam-input)
* [Computer vision object detection](https://github.com/quixio/quix-samples/tree/main/python/transformations/Image-processing-object-detection)
* [Stream merge](https://github.com/quixio/quix-samples/tree/develop/python/transformations/Stream-Merge)
* [Web UI](https://github.com/quixio/quix-samples/tree/main/nodejs/advanced/Image-Processing-UI)

## Next Steps
8 changes: 4 additions & 4 deletions docs/platform/tutorials/image-processing/tfl-frame-grabber.md
@@ -1,4 +1,4 @@
# 4. Frame extraction
# 5. Frame extraction

In this part of the tutorial you add a frame extraction service.

@@ -8,10 +8,10 @@
Follow these steps to deploy the **frame extraction service**:

1. Navigate to the `Code Samples` and locate `TfL traffic camera frame grabber`.

2. Click `Setup & deploy`.
2. Click `Deploy`.

3. Click `Deploy`.
3. Click `Deploy` once more.

This service receives data from the `tfl-cameras` topic and streams data to the `image-raw` topic.

[Part 6 - Web UI :material-arrow-right-circle:{ align=right }](web-ui.md)
[Part 6 - Stream merge :material-arrow-right-circle:{ align=right }](stream-merge.md)
12 changes: 6 additions & 6 deletions docs/platform/tutorials/image-processing/web-ui.md
@@ -1,4 +1,4 @@
# 5. Deploy the web UI
# 7. Deploy the web UI

In this part of the tutorial you add a service to provide a simple UI with which to monitor and control the pipeline.

@@ -14,15 +14,15 @@
Follow these steps to deploy the **web UI service**:

1. Navigate to the `Code Samples` and locate `TFL image processing UI`.

2. Click `Setup & deploy`.
2. Click `Deploy`.

3. Click `Deploy`.
3. Click `Deploy` again.

4. Once deployed, click the service tile.

5. Click the `Public URL` to launch the UI in a new browser tab.

![image processing web UI](./images/ui-public-url.png)

You have now deployed the web UI.

38 changes: 37 additions & 1 deletion docs/stylesheets/extra.css
@@ -328,4 +328,40 @@
button.header-btn-inner:hover {
}
.md-code__button{
color: var(--md-default-fg-color--lightest);
}

.md-typeset iframe{
max-width: none !important;
}

/* Wrapper clips the scaled-down iframe to a 689×544 viewport */
#wrap
{
width: 689px;
height: 544px;
padding: 0;
overflow: hidden;
}

/* Render the demo at full size (1247×1000), then scale it to 55%
   from the top-left corner so it fits inside #wrap */
#frame
{
width: 1247px;
height: 1000px;
border: 0;

-ms-transform: scale(0.55);
-moz-transform: scale(0.55);
-o-transform: scale(0.55);
-webkit-transform: scale(0.55);
transform: scale(0.55);

-ms-transform-origin: 0 0;
-moz-transform-origin: 0 0;
-o-transform-origin: 0 0;
-webkit-transform-origin: 0 0;
transform-origin: 0 0;
}

.md-container{
height: calc(100vh - 72px) !important;
}

8 changes: 6 additions & 2 deletions mkdocs.yml
@@ -61,8 +61,9 @@
nav:
- '3. Object detection': platform/tutorials/image-processing/object-detection.md
- '4. Connect TfL video': platform/tutorials/image-processing/connect-video-tfl.md
- '5. Frame grabber': platform/tutorials/image-processing/tfl-frame-grabber.md
- '6. Deploy the UI': platform/tutorials/image-processing/web-ui.md
- '7. Summary': platform/tutorials/image-processing/summary.md
- '6. Stream merge': platform/tutorials/image-processing/stream-merge.md
- '7. Deploy the UI': platform/tutorials/image-processing/web-ui.md
- '8. Summary': platform/tutorials/image-processing/summary.md
- 'Sentiment analysis':
- platform/tutorials/sentiment-analysis/index.md
- '1. Sentiment Demo UI': 'platform/tutorials/sentiment-analysis/sentiment-demo-ui.md'
@@ -84,6 +85,9 @@
nav:
- 'No code sentiment analysis': 'platform/tutorials/nocode-sentiment/nocode-sentiment-analysis.md'
- 'MATLAB and Simulink': 'platform/tutorials/matlab/matlab-and-simulink.md'
- 'Code Samples': 'platform/samples/samples.md'
- 'Integrations':
- 'Kafka':
- 'Confluent': 'platform/integrations/kafka/confluent-cloud.md'
- 'Connectors':
- platform/connectors/index.md
#ConnectorsGetInsertedHere
