
Provides a development environment (FlexWATCH® OPEN AI SDK 4.0) for porting multi-stage inference AI models and post-processing their metadata.
“Customers can port AI models to IP cameras and post-process metadata without manufacturer assistance.”
“Customers can manage finished products running their proprietary AI models directly with a license key.”
Seyeon Tech provides an open AI (artificial intelligence) SDK so that customers can build their own AI IP cameras. Using the Open AI SDK, customers can port their own trained AI models to IP cameras and put them to use. Traditionally, integrating proprietary AI technology into an IP camera was practically impossible for anyone other than the camera manufacturer.
Seyeon Tech provides a PC training environment and a standardized conversion tool for porting trained AI models to IP cameras. The conversion tool is accessible via the web, so customers can port their models to IP cameras and view the results themselves. The advantage is that customers do not need to share their datasets or trained models with Seyeon Tech. If AI technology needs to run at the edge, it can be implemented in the camera using the Open AI SDK.
Today, we are releasing FlexWATCH® Open AI SDK 4.0, an evolution of the previously introduced SDK. Version 4.0 has three key features: support for a multi-stage inference development environment, a development environment for proprietary metadata post-processing, and management of proprietary AI models using license keys. Prior to 4.0, we primarily supported a single detection AI model. However, because market demand for multi-stage inference AI models and proprietary metadata post-processing has grown rapidly, we have incorporated both into the SDK. With this release, customers can directly integrate the AI features they want into Seyeon Tech's IP camera modules and finished products, and manage their own proprietary AI products with license keys.
Table of Contents:
1. Introducing FlexWATCH® OPEN AI SDK 4.0
2. Support for Multi-Stage Inference Development Environment
3. Support for Custom Development Environment for Metadata Post-Processing
4. License Key Support for Protecting Unique AI Models
5. Other
Overview:
1. Introducing FlexWATCH® OPEN AI SDK 4.0

The SDK provides instructions on how to configure an Ubuntu PC training environment for porting a proprietary AI model to an IP camera, how to train with the specified AI model, and how to port the results. The following is an overview of what's included in the SDK (a brief sketch of steps d and g follows the list):
a. Preparation of the PC environment
b. Creating a Python virtual environment
c. MobileNetV2 + SSDLite training (YOLO support coming soon)
d. Converting the trained model to an ONNX file
e. Validating the trained model
f. Converting the ONNX file to a binary file that runs on a FlexWATCH® camera
g. Creating a compressed file (tar.gz) to upload to the FlexWATCH® camera
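To make the workflow concrete, below is a minimal Python sketch of steps (d) and (g): exporting a trained model to ONNX and packaging the converted binary for upload. The file names, input size, and export settings are illustrative assumptions, and the ONNX-to-camera-binary conversion (step f) is performed with Seyeon Tech's web-based conversion tool, which is not shown here.

```python
# Minimal sketch of steps (d) and (g): exporting a trained PyTorch model to ONNX
# and packaging the converted camera binary as a tar.gz for upload.
# File names, input size, and opset are illustrative assumptions; step (f),
# converting the ONNX file to a FlexWATCH camera binary, is done with
# Seyeon Tech's web-based conversion tool and is not shown here.
import tarfile
import torch

def export_to_onnx(model, onnx_path="detector.onnx", input_size=(1, 3, 300, 300)):
    """Export a trained MobileNetV2 + SSDLite detector (PyTorch) to an ONNX file."""
    model.eval()
    dummy_input = torch.randn(*input_size)   # one dummy frame at the training resolution
    torch.onnx.export(
        model, dummy_input, onnx_path,
        opset_version=11,
        input_names=["image"],
        output_names=["boxes", "scores"],
    )
    return onnx_path

def package_for_camera(binary_path, archive_path="ai_model.tar.gz"):
    """Bundle the converted camera binary into a tar.gz ready for upload."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(binary_path, arcname=binary_path)
    return archive_path
```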
Seyeon Tech supplies IP camera modules and finished products ranging from 2 MP to 12 MP. They support rolling shutter and global shutter, so you can select the right product for your application and port your proprietary AI technology to it. For high-resolution cameras, you can set an ROI based on the available computation and increase the frame rate by running inference only within that area. (Some features are under development.)
2. Support for Multi-Stage Inference Development Environment

a. The primary purpose of the multi-stage inference development environment is License Plate Recognition (LPR) development. As shown in the figure above, license plates are grouped and segmented in Stage 1, and LPR is then performed with multiple subsequent AI models. This configuration can be used not only for LPR but also for various other detection and recognition models.
- For example, when classifying animals, one model can first separate birds from terrestrial animals into groups, and subsequent models can then classify each group further.
b. Post-processing, such as cropping, for passing metadata extracted from the Stage 1 model (Model 0) to Stage 2 has been standardized, and the following features are supported (a minimal sketch of the two-stage flow follows this list):
- Customers can develop their own post-processing for the metadata extracted from Models 1, 2, 3, and 4 in the figure above (the number of models can be increased or decreased).
- Support for basic frame-to-frame object tracking and the filtering it requires.
- Support for transmission of final metadata after metadata post-processing according to ONVIF standards and proprietary standards.
c. For this purpose, an IP camera toolchain is provided along with an SDK.
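For illustration, here is a minimal Python sketch of the two-stage flow described above: a Stage 1 detector produces boxes, each detection is cropped, and a Stage 2 model is run on each crop. The names run_stage1 and run_stage2 are placeholders for customer-supplied models, not actual SDK calls; on the camera itself this flow is implemented with the provided toolchain.

```python
# Illustrative two-stage inference flow (detection -> crop -> recognition).
# run_stage1 / run_stage2 are placeholders for customer-supplied models,
# not real SDK functions; `frame` is assumed to be a NumPy image array.

def run_pipeline(frame, run_stage1, run_stage2):
    results = []
    for det in run_stage1(frame):            # Stage 1: e.g. license plate detection
        x, y, w, h = det["box"]
        crop = frame[y:y + h, x:x + w]       # standardized cropping step
        label = run_stage2(crop)             # Stage 2: e.g. character recognition
        results.append({"box": det["box"], "label": label, "score": det["score"]})
    return results
```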
3. Support for Custom Development Environment for Metadata Post-Processing

a. AI model inference results must be post-processed. For example, in LPR, the AI model outputs class information (digits, letters) and location information for each detected character. Post-processing these results is essential for accurately determining and using the license plate number.
b. Since this process varies by company, we provide an SDK development environment so that customers can perform metadata post-processing themselves (a minimal sketch follows this list). If LPR is the goal, Seyeon Tech's own approach can also be used.
c. For this purpose, we provide a toolchain that enables software development on RISC-V CPUs. (Separate NDA and agreement required)
d. By transmitting metadata via ONVIF or our own protocol, bounding boxes, class information, and confidence values can be displayed in a browser, and an API is also provided for building various application functions.
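As a simple example of the kind of post-processing a customer might implement, the sketch below assembles a plate string from per-character results by filtering on confidence and sorting boxes left to right. The dictionary keys ("char", "conf", "box") are assumptions for illustration, not the SDK's actual metadata schema.

```python
def assemble_plate(char_detections, min_conf=0.5):
    """Turn per-character metadata (class, box, confidence) into a plate string."""
    kept = [d for d in char_detections if d["conf"] >= min_conf]  # drop low-confidence hits
    kept.sort(key=lambda d: d["box"][0])                          # order left to right by x
    return "".join(d["char"] for d in kept)

# Example: three character detections out of reading order
detections = [
    {"char": "3", "conf": 0.91, "box": (120, 40, 20, 30)},
    {"char": "A", "conf": 0.88, "box": (60, 40, 20, 30)},
    {"char": "7", "conf": 0.42, "box": (150, 40, 20, 30)},  # filtered out
]
print(assemble_plate(detections))  # -> "A3"
```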
4. License Key Support for Protecting Unique AI Models

a. When a customer ports an AI model to an IP camera, they may be concerned about the potential distribution of their proprietary AI model. This SDK update allows customers to directly manage the license keys for their ported AI models. Each product requires a license key for operation, protecting their proprietary AI models.
b. For small quantities, customers can purchase individual modules or finished products and then port the AI model for distribution.
c. For large quantities, Seyeon Tech can create the AI model as firmware and distribute it. This process can also be managed using the license key.
5. Other

1. Seyeon Tech aims to provide a unique IP camera development environment for development-oriented customers who want to use AI technology. This means sharing core technologies with third-party partners to increase the usability of IP cameras.
2. Seyeon Tech IP cameras natively support simple detection and intelligent rules (e.g., auto-tracking).
3. This SDK 4.0 is available through an NDA.
4. Supported modules and finished products are as follows:
A. 2 MP, 5 MP, and 8 MP modules (over 10 models, including the EX2-307, EX2-335, and EX1-412, as well as global shutter modules).
B. 2 MP, 5 MP, and 8 MP IP cameras (over 20 models, including dome, bullet, and PTZ cameras in the FW9709, FW9307, FW7940, FW7511, and FW7300 series).

We bring you the latest news from Seyeon Tech.