solution-diagram
14 items tagged with "solution-diagram"
Diagrams
AI based Chatbot with LLaMA Models and Microsoft Semantic Kernel
This diagram illustrates an idea for implementing an AI-based chatbot for Kontext on Azure. It leverages open-source frameworks and LLMs (large language models) to implement RAG (retrieval-augmented generation). The chatbot retrieves domain-specific information from the Kontext search engine as context and provides it to a backend API service, which uses Microsoft Semantic Kernel to construct the question, sends it to the LLM service to generate an answer, and returns the answer to the frontend.
Alternatives: in the diagram, the LLaMA models can be replaced by other models, such as OpenAI services, Azure OpenAI services or any other suitable language model. Microsoft Semantic Kernel can also be replaced with LangChain.
References:
microsoft/SemanticKernelCookBook: This is a Semantic Kernel's book for beginners (github.com)
Llama2Chat | 🦜️🔗 Langchain
SciSharp/LLamaSharp: Run local LLaMA/GPT model easily and fast in C#!🤗 It's also easy to integrate LLamaSharp with semantic-kernel, unity, WPF and WebApp. (github.com)
blazorly/LangChain.NET (github.com)
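The retrieve-then-generate flow above can be sketched in a few lines of plain Python. This is a minimal illustration only: `search_kontext` and `call_llm` are hypothetical stand-ins for the Kontext search engine and the LLM service, and a real implementation would route these steps through Microsoft Semantic Kernel or LangChain.

```python
# Minimal RAG sketch. `search_kontext` and `call_llm` are hypothetical
# placeholders for the Kontext search engine and the LLM backend.

def search_kontext(question: str) -> list[str]:
    # Placeholder: return domain-specific snippets for the question.
    return ["Kontext is a platform for data engineering content."]

def build_prompt(question: str, context_snippets: list[str]) -> str:
    # Ground the model's answer in the retrieved context.
    context = "\n".join(f"- {s}" for s in context_snippets)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for the call to LLaMA / Azure OpenAI.
    return "(model-generated answer)"

def answer(question: str) -> str:
    return call_llm(build_prompt(question, search_kontext(question)))
```

The key design point is that the model is asked to answer only from the retrieved context, which is what keeps responses grounded in Kontext's own content.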
Solution Diagram Example
Example application and analytics on Azure.
Kontext Architecture 2023 on Azure
This diagram shows how Kontext is designed and built with Azure and GitHub products. Only the core products are shown; some others are not included in the diagram, for example the Azure Automation account used for batch data calculations and updates. Notes: Common Services are used by most of the layers; Azure Functions is commonly used for asynchronous actions; Azure Container Apps is used to host the ASP.NET Core application and APIs.
Azure Serverless Architecture for Web/Mobile Applications
This diagram shows how to implement a serverless architecture on Azure. Reference: Serverless web application - Azure Architecture Center | Microsoft Learn
AWS ETL Solution with Glue Diagram
This diagram shows one example of using AWS Glue to crawl, catalog and process data stored in S3. Data landed in the raw bucket is scanned by a Glue Crawler and its metadata is stored in the Glue Catalog. A Glue ETL job loads the raw data, applies transformations and stores the processed data in the curated bucket. The processed files are scanned by another Glue Crawler, and the processed data can then be queried with Amazon Athena and used further in reports and dashboards.
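The raw-to-curated transform step can be illustrated with a plain-Python sketch. Real Glue ETL jobs are typically written with PySpark and DynamicFrames; the column names here (`customer_id`, `amount`) are purely illustrative.

```python
# Plain-Python stand-in for the kind of transform a Glue ETL job performs
# between the raw and curated buckets: drop malformed rows, then aggregate.
from collections import defaultdict

def transform(raw_rows: list[dict]) -> list[dict]:
    # Keep only rows with both a customer and an amount, then sum per customer.
    totals = defaultdict(float)
    for row in raw_rows:
        if row.get("customer_id") and row.get("amount") is not None:
            totals[row["customer_id"]] += float(row["amount"])
    return [
        {"customer_id": c, "total_amount": t}
        for c, t in sorted(totals.items())
    ]
```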
AWS Big Data Lambda Architecture for Streaming Analytics
This diagram shows a typical lambda streaming processing solution on AWS with Amazon Kinesis, AWS Glue, Amazon S3, Amazon Athena and Amazon QuickSight:
Amazon Kinesis - captures streaming data via Data Firehose, then transforms and analyzes it using Data Analytics; the analytics results are written to another Data Firehose delivery stream. For batch processing, the captured streaming data can also be loaded directly into an S3 bucket.
Amazon S3 - stores raw streaming data and batch-processed data.
AWS Glue - transforms batch data in S3 and stores the processed data in another bucket for consumption.
Amazon Athena - reads data in S3 via SQL.
Amazon QuickSight - data visualization tool.
References: AWS IoT Streaming Processing Solution Diagram; AWS IoT Streaming Processing Solution Diagram w Glue
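The defining trait of the lambda architecture is that each event feeds both paths at once. The sketch below shows that split in plain Python: a list stands in for the S3 raw store on the batch path, and an in-memory counter stands in for the Kinesis Data Analytics aggregate on the speed path; all names and structures are illustrative.

```python
# Lambda-architecture split: every event goes to the batch path (raw store)
# and is simultaneously folded into a speed-layer aggregate.
raw_store = []     # batch path: raw events kept for later Glue processing
speed_counts = {}  # speed path: running event count per sensor

def ingest(event: dict) -> None:
    # Batch layer: append the raw event untouched.
    raw_store.append(event)
    # Speed layer: update the low-latency aggregate immediately.
    key = event["sensor_id"]
    speed_counts[key] = speed_counts.get(key, 0) + 1
```

Queries can then merge the precise-but-slow batch view with the fresh-but-approximate speed view.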
AWS IoT Streaming Processing Solution Diagram w Glue
This diagram shows a typical streaming processing solution on AWS with Amazon Kinesis, AWS Glue, Amazon S3, Amazon Athena and Amazon QuickSight:
Amazon Kinesis - captures streaming data via Data Firehose and loads it into S3.
Amazon S3 - stores raw streaming data and batch-processed data.
AWS Glue - transforms batch data in S3 and stores the processed data in another bucket for consumption.
Amazon Athena - reads data in S3 via SQL.
Amazon QuickSight - data visualization tool.
A similar solution diagram using streaming transformation: AWS IoT Streaming Processing Solution Diagram.
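For the Athena step, the processed S3 data is exposed as an external table and queried with standard SQL. The statements below are illustrative only; the database, table, column and bucket names are assumptions.

```python
# Illustrative Athena DDL and query over the curated S3 bucket.
# All names (curated.events, s3://curated-bucket/...) are assumptions.
create_table = """
CREATE EXTERNAL TABLE IF NOT EXISTS curated.events (
  sensor_id string,
  reading   double,
  ts        timestamp
)
STORED AS PARQUET
LOCATION 's3://curated-bucket/events/';
"""

query = (
    "SELECT sensor_id, avg(reading) AS avg_reading "
    "FROM curated.events GROUP BY sensor_id;"
)
```

QuickSight can then use the same Athena table as a dataset for dashboards.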
AWS IoT Streaming Processing Solution Diagram
This diagram shows a typical streaming processing solution on AWS with Amazon Kinesis, Amazon S3, Amazon Athena and Amazon QuickSight:
Amazon Kinesis - captures streaming data via Data Firehose, then transforms and analyzes it using Data Analytics; the analytics results are written to another Data Firehose delivery stream.
Amazon S3 - stores the processed streaming data.
Amazon Athena - reads data in S3 via SQL.
Amazon QuickSight - data visualization tool.
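The kind of analysis Kinesis Data Analytics runs over the stream is typically a windowed aggregation expressed in SQL or Flink. As a plain-Python stand-in, the sketch below buckets events into tumbling windows and counts per sensor; field names are illustrative.

```python
# Plain-Python stand-in for a tumbling-window aggregation of the kind
# Kinesis Data Analytics would run over the stream (normally SQL/Flink).
from collections import defaultdict

def tumbling_window_counts(events, window_seconds: int = 60) -> dict:
    # events: iterable of {"ts": epoch_seconds, "sensor_id": str}
    windows = defaultdict(int)
    for e in events:
        # Align each event to the start of its window.
        window_start = int(e["ts"]) // window_seconds * window_seconds
        windows[(window_start, e["sensor_id"])] += 1
    return dict(windows)
```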
AWS Batch Processing Solution Diagram (using AWS Glue)
This diagram shows a typical batch processing solution on AWS with Amazon S3, AWS Lambda, AWS Glue and Amazon Redshift: Amazon S3 stores staging data extracted from source systems on-premises or in the cloud. AWS Lambda registers data arrival in S3 buckets with the ETL framework and triggers the batch process. AWS Glue then integrates the data (merging, sorting, filtering, aggregations and other transformations) and loads it. Amazon Redshift stores the transformed data. This diagram is forked from AWS Batch Processing Solution Diagram.
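The S3-triggered Lambda step can be sketched as below: the handler pulls the newly arrived object's bucket and key out of the standard S3 event notification, which is what the ETL framework needs to register the arrival and kick off the Glue job. The boto3 call is shown commented out so the sketch stays self-contained, and the job name is an assumption.

```python
# Sketch of the S3-triggered Lambda: extract the arrived object from the
# S3 event notification, then (in a real deployment) start the Glue job.
def handler(event: dict, context=None) -> dict:
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # In a real handler (job name is an assumption):
    # import boto3
    # boto3.client("glue").start_job_run(
    #     JobName="curate-staging-data",
    #     Arguments={"--source_path": f"s3://{bucket}/{key}"},
    # )
    return {"bucket": bucket, "key": key}
```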
AWS Batch Processing Solution Diagram
This diagram shows a typical batch processing solution on AWS with Amazon S3, AWS Lambda, Amazon EMR and Amazon Redshift: Amazon S3 stores staging data extracted from source systems on-premises or in the cloud. AWS Lambda registers data arrival in S3 buckets with the ETL framework and triggers the batch process. Amazon EMR then transforms the data (e.g. aggregations) and loads it. Amazon Redshift stores the transformed data. This follows the traditional ETL pattern; you can also change it to an ELT pattern and perform the transformations directly in Redshift. Amazon EMR can be replaced with many other products.
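In the ELT variant, the load into Redshift is usually a COPY from S3, with transformations then run as SQL inside Redshift. The helper below builds such a statement; the table, bucket path and IAM role ARN are all assumptions for illustration.

```python
# Build an illustrative Redshift COPY statement for the S3-to-Redshift
# load step. Table, S3 path and IAM role are assumptions.
def build_copy_sql(table: str, s3_path: str, iam_role: str) -> str:
    return (
        f"COPY {table}\n"
        f"FROM '{s3_path}'\n"
        f"IAM_ROLE '{iam_role}'\n"
        "FORMAT AS PARQUET;"
    )
```

Usage: `build_copy_sql("sales.daily", "s3://curated-bucket/sales/", "arn:aws:iam::123456789012:role/load-role")` returns the COPY statement to submit to Redshift.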
Kontext Campaign Email Delivery Engine
Kontext utilizes Azure Functions to build a serverless campaign email delivery engine. This diagram highlights the high-level solution design.
Streaming Big Data to Azure Synapse via Azure Data Factory
This diagram shows a typical solution to stream big data to Azure Synapse:
Produced data is streamed into Azure Event Hubs.
Data in Event Hubs is captured into Azure Blob Storage via the Event Hubs Capture feature.
Once data capture completes, an event is sent to Event Grid.
Event Grid forwards the event information (including the blob file path) to an Azure Data Factory pipeline (i.e. an Event Grid-triggered pipeline).
Azure Data Factory uses the event data (blob path) as its source.
Azure Data Factory sinks the data to an Azure Synapse SQL data warehouse for analytics.
Streaming Big Data to Azure Synapse via Azure Functions
This diagram shows a typical solution to stream big data to Azure Synapse:
Produced data is streamed into Azure Event Hubs.
Data in Event Hubs is captured into Azure Blob Storage via the Event Hubs Capture feature.
Once data capture completes, an event is sent to Event Grid.
Event Grid forwards the event information (including the blob file path) to an Azure Functions app (i.e. an Event Grid-triggered function).
The Functions app uses the event data (blob path) to read the captured data.
The Functions app loads the data into an Azure Synapse SQL data warehouse for analytics.
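The Event Grid-triggered step boils down to pulling the captured blob's URL out of a `Microsoft.Storage.BlobCreated` event so the function can read the blob and load it into Synapse. The sketch below follows the documented Event Grid blob-created event shape; the actual blob read and Synapse load are omitted.

```python
# Extract the captured blob URL from an Event Grid BlobCreated event.
# The subsequent blob read and Synapse load are omitted from this sketch.
def blob_path_from_event(event: dict) -> str:
    if event.get("eventType") != "Microsoft.Storage.BlobCreated":
        raise ValueError("unexpected event type")
    return event["data"]["url"]
```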
Mount Azure Storage Volumes to Container Group
Azure Files shares can be mounted to container groups in Azure Container Instances or to App Services. To avoid latency, it is good practice to place the application containers in the same region as the file storage.