<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	 xmlns:media="http://search.yahoo.com/mrss/" >

<channel>
	<title>Technologies &#8211; Nearshore Software Development Company &#8211; IT Outsourcing Services</title>
	<atom:link href="https://nearshore-it.eu/technologies/feed/" rel="self" type="application/rss+xml" />
	<link>https://nearshore-it.eu</link>
	<description>We are a Nearshore Software Development Company with 14 years of experience in delivering large-scale IT projects in the areas of PHP, Java, .NET, BI and MDM.</description>
	<lastBuildDate>Thu, 02 Apr 2026 11:34:42 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://nearshore-it.eu/wp-content/uploads/2023/01/cropped-inetum-favicon-300x300-1-32x32.png</url>
	<title>Technologies &#8211; Nearshore Software Development Company &#8211; IT Outsourcing Services</title>
	<link>https://nearshore-it.eu</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>SAP Business Data Cloud &#038; Databricks: from data fragmentation to enterprise AI</title>
		<link>https://nearshore-it.eu/articles/sap-business-data-cloud-databricks/</link>
					<comments>https://nearshore-it.eu/articles/sap-business-data-cloud-databricks/#respond</comments>
		
		<dc:creator><![CDATA[Piotr]]></dc:creator>
		<pubDate>Wed, 25 Mar 2026 13:05:45 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Business Intelligence]]></category>
		<category><![CDATA[Databricks]]></category>
		<category><![CDATA[SAP]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=37894</guid>

					<description><![CDATA[Less than 40% of business leaders report high confidence in their own data. In this article, we break down how SAP Business Data Cloud and Databricks address that gap — with insights from SAP, a 50% cost reduction case study from Rolls-Royce, and a practical guide to getting started.]]></description>
										<content:encoded><![CDATA[
<p>More than half of organizations struggle to keep data accurate and consistent. Less than 40% of leaders report high confidence in their own numbers. Yet the pressure to deliver AI-driven insights &#8211; in real time &#8211; has never been higher.</p>



<p>That was the starting point for Inetum&#8217;s webinar <em><a href="https://www.engage.inetum.com/sap-bdc-databricks-webinar-on-demand/" target="_blank" rel="noopener">SAP Business Data Cloud &amp; Databricks: Unlock AI &amp; Data potential</a></em>, which brought together practitioners from Inetum and special guests from SAP and Rolls-Royce. The session featured Jan Tretina (Ecosystem Development Manager, SAP), Sebastian Stefanowski (Databricks Practice Leader, Inetum), Raul Muñoz-Gutierrez (SAP Analytics Business Director, Inetum), and Andrew Lager (Program Manager and Digital Delivery Manager, Civil Digital and IT, Rolls-Royce), hosted by Oleh Hudym (SAP Growth Manager, Inetum).</p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#less-than-40%-of-leaders-trust-their-own-data-and-that-gap-stalls-ai">1.  Less than 40% of leaders trust their own data &#8211; and that gap stalls AI</a></li>
                    <li><a href="#what-sap-business-data-cloud-is-and-how-it-closes-the-trust-gap">2.  What SAP Business Data Cloud is &#8211; and how it closes the trust gap</a></li>
                    <li><a href="#migration-bdc-works-with-what-organizations-already-have">3.  Migration: BDC works with what organizations already have</a></li>
                    <li><a href="#data-products-eliminate-the-80%-of-a-project-that-adds-no-value">4.  Data products eliminate the 80% of a project that adds no value</a></li>
                    <li><a href="#sap-databricks-and-standalone-databricks-the-same-engine-different-purpose">5.  SAP Databricks and standalone Databricks: the same engine, different purpose</a></li>
                    <li><a href="#six-years-with-databricks-at-rolls-royce:-50%-cost-reduction-and-ai-for-every-analyst">6.  Six years with Databricks at Rolls-Royce: 50% cost reduction and AI for every analyst</a></li>
                    <li><a href="#what-ai-on-sap-data-actually-looks-like">7.  What AI on SAP data actually looks like</a></li>
                    <li><a href="#flexible-consumption-licensing-that-moves-with-the-organization">8.  Flexible consumption: licensing that moves with the organization</a></li>
                    <li><a href="#what-is-coming-in-sap-business-data-cloud-in-2026">9.  What is coming in SAP Business Data Cloud in 2026</a></li>
                    <li><a href="#how-cfos-should-measure-roi-on-sap-business-data-cloud">10.  How CFOs should measure ROI on SAP Business Data Cloud</a></li>
                    <li><a href="#how-to-start-without-committing-to-a-licence-first">11.  How to start &#8211; without committing to a licence first</a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="less-than-40%-of-leaders-trust-their-own-data-and-that-gap-stalls-ai">Less than 40% of leaders trust their own data &#8211; and that gap stalls AI</h2>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em>&#8220;Less than 40% of leaders report high confidence in their data,&#8221;</em> Jan Tretina said at the opening of the session. <em>&#8220;That gap directly impacts both decision speed and innovation.&#8221;</em></p>
</blockquote>



<p>Three issues sit behind this figure. First, data quality: over half of organizations struggle to keep data accurate and consistent across systems. Second, misalignment between IT and business &#8211; finance teams need insights quickly, but IT landscapes can&#8217;t always deliver at that pace. Third, fragmentation: with data spread across multiple systems, bringing it together in real time remains a major pain point.</p>



<p>The business consequence is concrete. Data-oriented organizations &#8211; those that have solved the trust problem &#8211; are, according to SAP&#8217;s analysis,<strong> four times more likely to succeed.</strong></p>



<h2 class="wp-block-heading" id="what-sap-business-data-cloud-is-and-how-it-closes-the-trust-gap">What SAP Business Data Cloud is &#8211; and how it closes the trust gap</h2>



<p>Jan Tretina described SAP Business Data Cloud through a flywheel: AI is only as strong as the data behind it, data is only valuable when it is trusted and accessible, and both require a resilient platform underneath.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em>&#8220;SAP Business Data Cloud is our unified data foundation,&#8221;</em> he said. <em>&#8220;It brings clean, connected, and trusted business data together &#8211; unifying data across SAP and non-SAP systems and making that data immediately usable for AI, analytics, and planning.&#8221;</em></p>
</blockquote>



<p>BDC consolidates SAP data services &#8211; SAP BW, SAP Datasphere, SAP Analytics Cloud, and extension partners including Databricks and Snowflake &#8211; under a single platform. The operational impact Tretina highlighted: instead of maintaining thousands of pipelines, custom integrations, and shadow data, BDC provides one consistent data foundation across all use cases. It embraces non-SAP data, open standards, and a broad partner ecosystem, letting organizations build multi-vendor landscapes without losing control of their data. This architecture makes BDC relevant across sectors &#8211; from retailers to manufacturing to financial services and banking.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img fetchpriority="high" decoding="async" width="1024" height="1024" src="https://nearshore-it.eu/wp-content/uploads/2026/03/concept-flywheel-bdc-_c-1.jpg" alt="concept flywheel bdc c 1" class="wp-image-37917" title="SAP Business Data Cloud &amp; Databricks: from data fragmentation to enterprise AI 1" srcset="https://nearshore-it.eu/wp-content/uploads/2026/03/concept-flywheel-bdc-_c-1.jpg 1024w, https://nearshore-it.eu/wp-content/uploads/2026/03/concept-flywheel-bdc-_c-1-300x300.jpg 300w, https://nearshore-it.eu/wp-content/uploads/2026/03/concept-flywheel-bdc-_c-1-150x150.jpg 150w, https://nearshore-it.eu/wp-content/uploads/2026/03/concept-flywheel-bdc-_c-1-768x768.jpg 768w, https://nearshore-it.eu/wp-content/uploads/2026/03/concept-flywheel-bdc-_c-1-495x495.jpg 495w, https://nearshore-it.eu/wp-content/uploads/2026/03/concept-flywheel-bdc-_c-1-395x395.jpg 395w, https://nearshore-it.eu/wp-content/uploads/2026/03/concept-flywheel-bdc-_c-1-675x675.jpg 675w, https://nearshore-it.eu/wp-content/uploads/2026/03/concept-flywheel-bdc-_c-1-900x900.jpg 900w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</div>


<div style="height:39px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="migration-bdc-works-with-what-organizations-already-have">Migration: BDC works with what organizations already have</h2>



<p>A recurring concern when evaluating any new platform is the cost of transition &#8211; years of investment in SAP BW, Datasphere, or Analytics Cloud environments that can&#8217;t simply be discarded.</p>



<p>Raul Muñoz-Gutierrez addressed this directly. BDC includes all SAP data services within a single platform and is designed to absorb existing environments, not replace them. For SAP BW, BDC offers a &#8220;lift and shift&#8221; migration path &#8211; the existing environment moves in with data and connections preserved. </p>



<p>For SAP Datasphere or SAP Analytics Cloud, a &#8220;rewiring&#8221; process brings those solutions under the BDC umbrella without rebuilding. <em>&#8220;All these tasks don&#8217;t require any work from the customers,&#8221;</em> Muñoz-Gutierrez confirmed. <em>&#8220;They can be carried out directly by SAP.&#8221;</em></p>



<h2 class="wp-block-heading" id="data-products-eliminate-the-80%-of-a-project-that-adds-no-value">Data products eliminate the 80% of a project that adds no value</h2>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em>&#8220;Do you know the effort and time required in a data project to perform data extraction, loading and transformation, as well as reconciling all the information?&#8221;</em> Raul Muñoz-Gutierrez asked during the session. <em>&#8220;All that work, that normally is the 80% of the project, is partially eliminated through a SAP data product.&#8221;</em></p>
</blockquote>



<p>SAP data products contain the main data from key functional areas &#8211; financial, HR, supply chain &#8211; and they inherit all the data semantics from SAP. This makes them usable not just for reporting, but for AI and machine learning development from day one.</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img decoding="async" width="1296" height="641" src="https://nearshore-it.eu/wp-content/uploads/2026/03/data-products-8020-v1-1296x641.jpg" alt="data products 8020 v1" class="wp-image-37927" title="SAP Business Data Cloud &amp; Databricks: from data fragmentation to enterprise AI 2" srcset="https://nearshore-it.eu/wp-content/uploads/2026/03/data-products-8020-v1-1296x641.jpg 1296w, https://nearshore-it.eu/wp-content/uploads/2026/03/data-products-8020-v1-300x148.jpg 300w, https://nearshore-it.eu/wp-content/uploads/2026/03/data-products-8020-v1-768x380.jpg 768w, https://nearshore-it.eu/wp-content/uploads/2026/03/data-products-8020-v1-495x245.jpg 495w, https://nearshore-it.eu/wp-content/uploads/2026/03/data-products-8020-v1-1320x653.jpg 1320w, https://nearshore-it.eu/wp-content/uploads/2026/03/data-products-8020-v1.jpg 1456w" sizes="(max-width: 1296px) 100vw, 1296px" /></figure>
</div>


<div style="height:33px" aria-hidden="true" class="wp-block-spacer"></div>



<p>For CFOs specifically, the Financial Intelligence Package activates intelligent applications on SAP Analytics Cloud while simultaneously triggering data loading from SAP S/4HANA into BDC data products. <em>&#8220;Regularly every 30 minutes, the data is going to travel from our SAP S/4 system to our BDC,&#8221;</em> Muñoz-Gutierrez explained. Treasury management, financial planning, and forecasting become available and AI-ready from the moment of activation &#8211; with the option to build custom AI and machine learning on top.</p>



<h2 class="wp-block-heading" id="sap-databricks-and-standalone-databricks-the-same-engine-different-purpose">SAP Databricks and standalone Databricks: the same engine, different purpose</h2>



<p>If Databricks is already in the organization&#8217;s stack &#8211; or under evaluation &#8211; a natural question arises: how does SAP Databricks within BDC relate to the standard Databricks platform?</p>



<p>Sebastian Stefanowski, whose team works with both, explained the distinction. SAP Databricks is a modified release of the Databricks platform, tightly integrated with BDC &#8211; authentication, authorization, and billing are all managed through SAP, using SAP compute units. The most important aspect of the integration, in Stefanowski&#8217;s words, is a dedicated connector enabling zero-copy data sharing between SAP data products and the Databricks catalog: SAP financial data becomes directly queryable in Databricks without replication or transformation overhead.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em>&#8220;If SAP data is at the heart of your business, and financial data is really central to what you do, then SAP Databricks is definitely worth serious consideration,&#8221;</em> Stefanowski said. <em>&#8220;It would enable most of the useful features of Databricks with, generally speaking, no technological barrier.&#8221;</em></p>
</blockquote>



<p>The trade-off is feature breadth. Organizations with IoT streaming requirements, multi-stage declarative pipeline orchestration (DLT), or a need to manage their own compute clusters will find standalone Databricks richer &#8211; with dedicated connectors for external cloud services and platforms like Salesforce. In SAP Databricks, those broader integrations run through SAP. Stefanowski&#8217;s summary was clear: for general data integration scenarios, standalone Databricks; for organizations where SAP financial data is the core, SAP Databricks removes the barriers.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em>&#8220;What seemed to be impossible a few years back in terms of AI capabilities is now possible with this amazing partnership between SAP and Databricks,&#8221;</em> Oleh Hudym noted during the session.</p>
</blockquote>



<h2 class="wp-block-heading" id="six-years-with-databricks-at-rolls-royce:-50%-cost-reduction-and-ai-for-every-analyst">Six years with Databricks at Rolls-Royce: 50% cost reduction and AI for every analyst</h2>



<p>Andrew Lager has been on the Databricks journey at Rolls-Royce for over six years, including as an early adopter of many of its latest technologies &#8211; and he came to the webinar with numbers.</p>



<p><em>&#8220;We&#8217;ve reduced costs upwards of 50% compared to our previous solution,&#8221;</em> he said, referring to the migration from a legacy warehouse to Databricks Lakehouse for engine health monitoring data. Three factors drove that reduction: cheaper compute via Spark, schema evolution that prevents jobs from breaking when table structures change during migrations, and Unity Catalog &#8211; which centralizes data access across sources so Lager can produce management information for internal stakeholders without involving additional teams.</p>



<!-- CTA: Webinar on demand — mid-article -->
<div style="border-left:4px solid #00978a;background:#f7f8fc;padding:20px 24px;margin:32px 0;border-radius:0 4px 4px 0;">
  <p style="margin:0 0 6px;font-size:11px;letter-spacing:1.5px;text-transform:uppercase;color:#00978a;font-weight:700;">Webinar on demand</p>
  <p style="margin:0 0 10px;font-size:18px;font-weight:700;color:#0d1b2a;line-height:1.3;">The full Rolls-Royce story &#8211; and how to replicate it</p>
  <p style="margin:0 0 16px;font-size:15px;color:#444c5e;line-height:1.6;">Andrew Lager walks through six years of Databricks at Rolls-Royce &#8211; the architectural decisions, the AI features that changed daily operations, and the live Q&amp;A with SAP, Databricks, and Inetum practitioners.</p>
  <a href="https://www.engage.inetum.com/sap-bdc-databricks-webinar-on-demand/" style="display:inline-block;background:#00978a;color:#ffffff;padding:10px 22px;border-radius:4px;font-weight:600;font-size:14px;text-decoration:none;" target="_blank" rel="noopener">Watch the recording →</a>
</div>




<p>On the AI side, his example was equally direct. <em>&#8220;Those activities that would take me days can take me five minutes now,&#8221;</em> he said, describing Databricks AI Genie &#8211; a feature that lets non-technical users query datasets in natural language instead of SQL.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em>&#8220;The biggest benefit, not just for me but for Rolls-Royce in general, has been opening up technical solutions to less technical people.&#8221;</em></p>
</blockquote>



<h2 class="wp-block-heading" id="what-ai-on-sap-data-actually-looks-like">What AI on SAP data actually looks like</h2>



<p>Sebastian Stefanowski described Databricks as a complete environment for AI and machine learning, where SAP BDC data products are exposed as tables that plug directly into data pipelines. He grouped the platform&#8217;s AI and ML capabilities into several practical areas:</p>



<ul class="wp-block-list">
<li><strong>AI Playground</strong> &#8211; run any off-the-shelf or externally sourced model on a serverless runtime; create custom prompts and test hypotheses before committing to a build</li>



<li><strong>Model serving endpoints</strong> &#8211; host and serve custom models at scale</li>



<li><strong>AI agent framework</strong> &#8211; build agentic solutions quickly, with a built-in vector search database for RAG implementations</li>



<li><strong>MLflow</strong> &#8211; manage and monitor the full training lifecycle; compare experiments and select the best-performing model</li>



<li><strong>AutoML</strong> &#8211; Databricks runs parallel experiments across multiple model architectures on your data automatically, then selects the model that best fits your test results</li>



<li><strong>AI functions in SQL</strong> &#8211; call AI models directly from a SQL SELECT statement and store predictions as part of the query output</li>



<li><strong>AI Gateway</strong> &#8211; govern model usage centrally: block unauthorized access, filter sensitive queries, and maintain full oversight of what models run and on what data</li>
</ul>
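<p>The &#8220;AI functions in SQL&#8221; idea &#8211; invoking a model from inside a SELECT and keeping the predictions in the query output &#8211; can be illustrated locally. The sketch below is not Databricks code: it uses Python&#8217;s built-in sqlite3 with a toy keyword scorer standing in for a served model, and the table and function names are invented for illustration.</p>

```python
import sqlite3

# Toy "model": a keyword sentiment score standing in for a served model.
def sentiment_score(text):
    positive = {"great", "good", "fast"}
    negative = {"bad", "slow", "broken"}
    words = text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feedback (id INTEGER, comment TEXT)")
conn.executemany(
    "INSERT INTO feedback VALUES (?, ?)",
    [(1, "great product, fast delivery"), (2, "support was slow and the app broken")],
)

# Register the function so SQL can call it - the "AI function in SQL" shape.
conn.create_function("ai_score", 1, sentiment_score)

# Predictions land directly in the query output, no separate ML pipeline step.
rows = conn.execute("SELECT id, ai_score(comment) FROM feedback ORDER BY id").fetchall()
print(rows)  # [(1, 2), (2, -2)]
```

<p>In an actual warehouse the registered function would delegate to a model-serving endpoint rather than a hand-written heuristic; the SQL shape of the call stays the same.</p>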



<p>He then connected those capabilities to the most common use cases for SAP data: cash flow forecasting, price forecasting, and stock level optimization &#8211; blending SAP financial and operational data with seasonality data, interest rate history, and interest rate predictions to anticipate future costs, prices, and inventory needs. Databricks AutoML fits naturally here, running statistical algorithms in parallel and surfacing the model that best fits historical test data.</p>
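<p>In miniature, that select-the-best-fit approach amounts to fitting several candidate models and keeping the one with the lowest error on held-out data. A standard-library sketch &#8211; the synthetic series and the three naive candidates are illustrative, not Databricks internals:</p>

```python
# Miniature "AutoML": backtest naive candidates walk-forward and keep the
# one with the lowest mean absolute error on held-out data.
train = [100, 102, 101, 105, 107, 106, 110, 112]
test = [113, 115, 114, 118]

def naive_last(history):
    return history[-1]                  # predict the last observed value

def naive_mean(history):
    return sum(history) / len(history)  # predict the historical mean

def linear_trend(history):
    # Extrapolate a least-squares trend one step past the data.
    n = len(history)
    x_mean, y_mean = (n - 1) / 2, sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in enumerate(history))
             / sum((x - x_mean) ** 2 for x in range(n)))
    return y_mean + slope * (n - x_mean)

def mae(model, history, horizon):
    h, errors = list(history), []
    for actual in horizon:
        errors.append(abs(model(h) - actual))
        h.append(actual)                # walk forward: reveal the actual value
    return sum(errors) / len(errors)

candidates = {"last": naive_last, "mean": naive_mean, "trend": linear_trend}
scores = {name: mae(m, train, test) for name, m in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # trend
```

<p>AutoML does the same thing at scale: many architectures, parallel experiments, and an objective metric deciding the winner instead of intuition.</p>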



<p>Beyond forecasting, LLM integration opens a second category: automatically generating product descriptions from structured SAP product data; running sentiment analysis on customer feedback to understand the reasoning behind customer behavior; and identifying the most promising customers to approach based on that analysis. </p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em>&#8220;All these features can be really, really nicely integrated with SAP and blended with SAP data,&#8221;</em> Stefanowski said. <em>&#8220;I think that&#8217;s something which will bring SAP customers to another level.&#8221;</em></p>
</blockquote>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img decoding="async" width="1296" height="707" src="https://nearshore-it.eu/wp-content/uploads/2026/03/chapter-ai-neural-v1_c-1296x707.jpg" alt="chapter ai neural v1 c" class="wp-image-37930" title="SAP Business Data Cloud &amp; Databricks: from data fragmentation to enterprise AI 3" srcset="https://nearshore-it.eu/wp-content/uploads/2026/03/chapter-ai-neural-v1_c-1296x707.jpg 1296w, https://nearshore-it.eu/wp-content/uploads/2026/03/chapter-ai-neural-v1_c-300x164.jpg 300w, https://nearshore-it.eu/wp-content/uploads/2026/03/chapter-ai-neural-v1_c-768x419.jpg 768w, https://nearshore-it.eu/wp-content/uploads/2026/03/chapter-ai-neural-v1_c-495x270.jpg 495w, https://nearshore-it.eu/wp-content/uploads/2026/03/chapter-ai-neural-v1_c-1320x720.jpg 1320w, https://nearshore-it.eu/wp-content/uploads/2026/03/chapter-ai-neural-v1_c.jpg 1408w" sizes="(max-width: 1296px) 100vw, 1296px" /></figure>
</div>


<div style="height:28px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="flexible-consumption-licensing-that-moves-with-the-organization">Flexible consumption: licensing that moves with the organization</h2>



<p>Raul Muñoz-Gutierrez described the BDC commercial model as one of its differentiators. <em>&#8220;SAP Business Data Cloud offers a unit subscription model that provides customers with flexible subscription pricing,&#8221;</em> he said. <em>&#8220;This allows them to subscribe to the services they currently need and easily modify them in the future.&#8221;</em></p>



<p>The example he gave: an organization starts with SAP BW integrated into BDC. Once that environment has been migrated to SAP Datasphere, the BW capacity is removed and reallocated &#8211; to SAP Databricks or SAP Snowflake, for instance. A separate data services licence layer complements the platform subscriptions, enabling activation of analytical applications with near real-time data, development of custom data products, and external data sharing via BDC Connect &#8211; the service that exposes SAP data to Databricks, Snowflake, and Microsoft Fabric through Delta Sharing.</p>
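<p>Delta Sharing, the open protocol behind BDC Connect&#8217;s external sharing, authenticates consumers through a small JSON profile file. A hypothetical profile &#8211; the endpoint and token below are placeholders, not real BDC values &#8211; looks like this:</p>

```json
{
  "shareCredentialsVersion": 1,
  "endpoint": "https://sharing.example.com/delta-sharing/",
  "bearerToken": "<recipient-token>"
}
```

<p>A recipient tool such as the open-source delta-sharing client then loads a shared table by combining this profile with a share.schema.table coordinate, without the provider first replicating the data into the recipient&#8217;s platform.</p>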



<h2 class="wp-block-heading" id="what-is-coming-in-sap-business-data-cloud-in-2026">What is coming in SAP Business Data Cloud in 2026</h2>



<p>Jan Tretina outlined SAP&#8217;s roadmap across three areas.</p>



<p><strong>First, deeper openness and connectivity: </strong>BDC Connect is expanding to Databricks, Snowflake, Google Cloud, Microsoft Fabric, and AWS. Bi-directional data sharing with SAP HANA Cloud is also in development &#8211; data flowing both ways, removing silos at the source rather than managing them downstream.</p>



<p><strong>Second, data products:</strong> SAP is expanding coverage across more lines of business and releasing Data Product Studio, which will make modeling SAP and non-SAP data into governed, shareable data products <em>&#8220;much easier than ever before&#8221;</em> &#8211; turning data into <em>&#8220;a true asset others can consume and trust,&#8221;</em> in Tretina&#8217;s words.</p>



<p><strong>Third, AI-native capabilities built directly into the data layer:</strong> a new AI Hub with new models, enhancements to the SAP HANA Cloud Knowledge Graph, and a fully agentic multi-modal database &#8211; including memory for agents. <em>&#8220;This is the database AI was really looking for,&#8221;</em> Tretina said. </p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em>&#8220;We are building the data foundation every enterprise will need to power next-generation AI. 2026 will be a breakthrough year for BDC.&#8221;</em></p>
</blockquote>



<h2 class="wp-block-heading" id="how-cfos-should-measure-roi-on-sap-business-data-cloud">How CFOs should measure ROI on SAP Business Data Cloud</h2>



<p>In the Q&amp;A, Jan Tretina identified four areas where BDC generates measurable returns for finance leaders.</p>



<p><strong>Efficiency and proactivity.</strong> Fewer manual reconciliations, reduced data preparation effort, faster close-to-forecast cycles.</p>



<p><strong>Data trust.</strong> Higher data quality and fewer errors &#8211; directly addressing the confidence gap he described at the start of the session.</p>



<p><strong>Agility.</strong> Faster time to insight and quicker scenario iterations.</p>



<p><strong>Business impact.</strong> Improved forecast accuracy and tangible cost savings from AI-driven recommendations.</p>



<p><em>&#8220;CFOs should look at gains in proactivity, trust, and measurable financial outcomes,&#8221;</em> Tretina said. <em>&#8220;With SAP BDC, the ROI is visible both in operational efficiency and in the quality of decisions which can power the business.&#8221;</em></p>



<h2 class="wp-block-heading" id="how-to-start-without-committing-to-a-licence-first">How to start &#8211; without committing to a licence first</h2>



<p>For organizations that want to evaluate BDC before committing to a full rollout, Raul Muñoz-Gutierrez described a concrete entry point. Inetum has three years of experience with SAP Datasphere and related environments, and operates its own SAP Business Data Cloud environment &#8211; available to run proof-of-concept scenarios with customers in a live setting, testing whether a target architecture fits their real needs before any licence investment.</p>



<p><em>&#8220;You don&#8217;t have to invest in a licence right now,&#8221;</em> Muñoz-Gutierrez said. <em>&#8220;You have to know what you want, and we are the best company to accompany you to this final scenario &#8211; because we have SAP Data Specialists and also global data expertise from non-SAP solutions.&#8221;</em></p>



<p>On the Databricks side, Sebastian Stefanowski was unambiguous: <em>&#8220;Databricks is for everyone.&#8221;</em> The platform runs on a pay-per-use model &#8211; if workloads run once a day for an hour, the cost reflects that hour. If the platform is idle, the cost goes to zero. <em>&#8220;The openness and scalability of this platform &#8211; which actually scales down to zero &#8211; that is what should encourage any company who wants to start cloud data processing,&#8221;</em> he said.</p>



<p>The message across the webinar was consistent: <strong>before AI delivers value at scale, organizations need a data foundation that is clean, connected, and trusted.</strong> SAP Business Data Cloud is built to be that foundation. Databricks extends what can be done on top of it. And when the two are combined with the right approach, the gap between fragmented enterprise data and working AI closes faster than most organizations expect.</p>



<hr>

<p><strong>The full session is available on demand.</strong></p>
<p>SAP, Databricks, Rolls-Royce, and Inetum practitioners covered migration strategy, data product architecture, and AI implementation &#8211; including the unedited live Q&amp;A. <a href="https://www.engage.inetum.com/sap-bdc-databricks-webinar-on-demand/" style="color:#00978a;font-weight:600;" target="_blank" rel="noopener">Watch the webinar on demand →</a></p>

<p>To run a proof of concept in Inetum&#8217;s own SAP Business Data Cloud environment &#8211; without upfront licence commitment &#8211; <a href="https://www.engage.inetum.com/data-and-ai-contact/" target="_blank" rel="noopener">talk to our team →</a></p>

]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/sap-business-data-cloud-databricks/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The rise of AI agents – your new digital coworkers </title>
		<link>https://nearshore-it.eu/articles/ai-agents-your-new-digital-coworkers/</link>
					<comments>https://nearshore-it.eu/articles/ai-agents-your-new-digital-coworkers/#respond</comments>
		
		<dc:creator><![CDATA[Wiktor Zdzienicki]]></dc:creator>
		<pubDate>Tue, 05 Aug 2025 12:58:04 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=37412</guid>

					<description><![CDATA[In this article, we’ll walk through what AI agents are, how they’re built, where they add operational value, what risks they introduce, and how to launch them thoughtfully.]]></description>
										<content:encoded><![CDATA[
<p>We live in an era of rapid technological change. In business, <strong>AI agents</strong> have evolved from supporting roles into <strong>autonomous collaborators</strong>. These digital companions handle routine tasks, synthesize complex data, and execute multi-step workflows – with minimal supervision.</p>



<div class="table-of-contents">
    <p class="title">Go to:</p>
    <ol>
                    <li><a href="#AI-Applications-Reshaping-Business-in-2025-">1.  AI Applications Reshaping Business in 2025 </a></li>
                    <li><a href="#What-is-an-AI-agent?">2.  What is an AI agent? </a></li>
                    <li><a href="#How-AI-agents-work?">3.  How do AI agents work? </a></li>
                    <li><a href="#Implementing-AI-agents">4.  Implementing AI agents </a></li>
                    <li><a href="#Key-types-of-AI-agent-system">5.  Key types of AI agent system </a></li>
                    <li><a href="#Capabilities-and-risks-of-using-AI">6.  Capabilities and risks of using AI </a></li>
                    <li><a href="#Use-cases-across-the-enterprise">7.  Use cases across the enterprise </a></li>
                    <li><a href="#How-to-get-started-with-AI-agents?-Start-smart!">8.  How to get started with AI agents? Start smart!  </a></li>
                    <li><a href="#Understanding-AI-agents---summary">9.  Understanding AI agents &#8211; summary  </a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="AI-Applications-Reshaping-Business-in-2025-"><strong>AI Applications Reshaping Business in 2025</strong>&nbsp;</h2>



<p>Generative AI sparked a revolution that shows no signs of slowing down. Traditional AI alone already had an estimated global value potential of <strong>$11–18 trillion</strong>, <a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage" target="_blank" rel="noreferrer noopener">according to McKinsey</a>. 2025 is undoubtedly the year of another LLM-based technology: agentic AI.</p>



<p>By combining autonomous decision-making with system integration, AI agents turn generative AI into digital collaborators that deliver outstanding results. In call centers alone, according to McKinsey, autonomous proactive agents can reduce resolution time <strong>by 60–90%</strong>, with <strong>80% of cases</strong> resolved automatically. While agents are commonly associated with chatbots, their applications extend far beyond that across many industries. I’ll explain what agents are and explore typical use cases in selected sectors.&nbsp;</p>



<h2 class="wp-block-heading" id="What-is-an-AI-agent?"><strong>What is an AI agent?</strong>&nbsp;</h2>



<p>Welcome to a world where AI isn’t just helping – you might say it’s <em>working</em>. <strong>AI agents</strong> are intelligent systems that go beyond scripted interactions. Often powered by large language models, they combine natural language comprehension, data access, and task logic to act proactively and independently. Companies are increasingly deploying these advanced AI systems to gain a competitive advantage.&nbsp;</p>



<p><strong>Also read:</strong></p>



<ul class="wp-block-list">
<li> <a href="https://nearshore-it.eu/articles/ai-in-project-management-ai-agents/" data-type="post" data-id="37378">AI in project management</a></li>



<li><a href="https://nearshore-it.eu/articles/ai-coding-agent/">Revolutionizing coding with AI coding agent</a></li>
</ul>



<h3 class="wp-block-heading"><strong>How can companies use AI agents?</strong>&nbsp;</h3>



<p>Imagine a system that not only understands your request – “prepare the sales forecast for Q3” – but fetches the data, runs the analysis, emails the report, and schedules a review meeting. These agents operate under specified goals, can learn over time, and may run visibly – through chat – or silently in the background. Their defining feature is <strong>goal-driven autonomy</strong>, transforming artificial assistants into digital coworkers.&nbsp;</p>



<h2 class="wp-block-heading" id="How-AI-agents-work?"><strong>How do AI agents work?</strong>&nbsp;</h2>



<p>Think of agents as “turnkey automation engines.” They follow a precise <strong>perception–decision–action</strong> loop: they observe signals (like system alerts or emails), decide using reasoning engines or LLMs, and then act – updating records, triggering workflows, or escalating tasks. Today’s agents also incorporate <strong>memory components</strong>, <strong>tool orchestration</strong>, and <strong>task coordination</strong>, enabling them to manage complex multi-step processes without intervention.&nbsp;</p>
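<p>The loop can be sketched in a few lines of Python. This is an illustrative toy, not a real product API: the signal fields, thresholds, and handlers below are invented for the example.</p>

```python
# Toy sketch of the perception–decision–action loop (all names are invented).

def perceive(inbox):
    """Observe: pull the next raw signal (alert, email, event)."""
    return inbox.pop(0) if inbox else None

def decide(signal):
    """Decide: a real agent would call an LLM or a rules engine here."""
    if signal["type"] == "system_alert" and signal["severity"] >= 8:
        return {"action": "escalate", "target": signal["source"]}
    return {"action": "log", "target": signal["source"]}

def act(decision, audit_log):
    """Act: update records, trigger workflows, or escalate; keep a trace."""
    audit_log.append(decision)
    return decision["action"]

def run_agent(inbox):
    audit_log = []
    while (signal := perceive(inbox)) is not None:
        act(decide(signal), audit_log)
    return audit_log

inbox = [
    {"type": "system_alert", "severity": 9, "source": "db-01"},
    {"type": "email", "severity": 2, "source": "billing"},
]
log = run_agent(inbox)
# The severe alert is escalated; the routine email is merely logged.
```

<p>Memory, tool orchestration, and task coordination would slot in as extra state and calls inside this same loop.</p>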



<h2 class="wp-block-heading" id="Implementing-AI-agents"><strong>Implementing AI agents</strong>&nbsp;</h2>



<p>Modern implementations often use a <strong>hybrid architecture</strong>, featuring an orchestrator that directs specialized sub-agents. Picture a travel-booking agent working alongside calendar and ticketing agents – coordinating to book flights, reserve hotels, and notify participants. This multi-agent pattern is seen across enterprise automation platforms with growing frequency.&nbsp;&nbsp;</p>
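<p>The hybrid pattern can be sketched as an orchestrator that routes each step of a plan to a specialized sub-agent. All agent names and behaviors below are invented for illustration:</p>

```python
# Hypothetical sketch of an orchestrator directing specialized sub-agents.

class SubAgent:
    def __init__(self, name, handler):
        self.name, self.handler = name, handler

    def handle(self, task):
        return self.handler(task)

class Orchestrator:
    """Routes each step of a plan to the sub-agent registered for that skill."""
    def __init__(self):
        self.registry = {}

    def register(self, skill, agent):
        self.registry[skill] = agent

    def execute(self, plan):
        results = []
        for skill, task in plan:
            agent = self.registry[skill]
            results.append((agent.name, agent.handle(task)))
        return results

orch = Orchestrator()
orch.register("flights", SubAgent("ticketing", lambda t: f"booked flight to {t}"))
orch.register("hotel", SubAgent("lodging", lambda t: f"reserved hotel in {t}"))
orch.register("calendar", SubAgent("calendar", lambda t: f"invited {t}"))

results = orch.execute([("flights", "Lisbon"), ("hotel", "Lisbon"), ("calendar", "team")])
```

<p>A production system would replace the lambdas with agents that call real booking and calendar APIs, but the routing shape stays the same.</p>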



<h2 class="wp-block-heading" id="Key-types-of-AI-agent-system"><strong>Key types of AI agent system</strong>&nbsp;</h2>



<p>There are at least four powerful archetypes shaping agentic AI excellence:&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Reactive agents</strong> are like digital reflexes: they respond instantly to input, without memory or planning. Think about transaction alert monitors or system uptime watchdogs.&nbsp;&nbsp;</li>



<li><strong>Deliberative agents</strong> act like digital strategists: they reason and plan across multiple steps. Picture a forecasting assistant that builds internal models, evaluates scenarios, and adapts recommendations dynamically.&nbsp;&nbsp;</li>



<li><strong>Conversational agents</strong> serve as user-facing colleagues: they hold fluid, multi-turn dialogue, pull in context, and even take action – like scheduling meetings or generating analytics. Many organizations are now layering LLMs on top of enterprise chat platforms.&nbsp;&nbsp;</li>



<li><strong>Tool-using agents</strong> are digital operators: they connect to APIs or apps, generate charts, files, or records – bridging insight and outcome.&nbsp;&nbsp;</li>
</ul>



<p>In practice, these archetypes often collaborate – for example, a deliberative agent may trigger multiple tool-using agents to fulfill its plan, overseen by a reactive watchdog.  </p>
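<p>As a toy illustration of such collaboration, here a tool-using agent (calling a stubbed, flaky external tool) is overseen by a reactive watchdog that retries on failure and escalates when retries run out. Every name is hypothetical:</p>

```python
# Two archetypes cooperating: a tool-using agent plus a reactive watchdog.

def flaky_chart_tool(data, attempt):
    # Stand-in for a real external tool/API call; it fails on the first try.
    if attempt == 0:
        raise TimeoutError("chart service unavailable")
    return f"chart({len(data)} points)"

def tool_using_agent(data, attempt):
    """Bridges insight and outcome by invoking an external tool."""
    return flaky_chart_tool(data, attempt)

def reactive_watchdog(task, data, max_retries=2):
    """Reacts instantly to failures: retry, then escalate to a human."""
    last = None
    for attempt in range(max_retries + 1):
        try:
            return {"status": "ok", "result": task(data, attempt)}
        except TimeoutError as err:
            last = str(err)
    return {"status": "escalated", "reason": last}

outcome = reactive_watchdog(tool_using_agent, [1, 2, 3])
```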



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><br><div class="promotion-box promotion-box--image-right "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/06/BigCTA_MarekCzachorowski.jpg" alt="BigCTA MarekCzachorowski" title="The rise of AI agents – your new digital coworkers  4"></div><div class="tile-content"><p class="entry-title client-name promotion-box__headline2">Marek Czachorowski</p>
<p class="promotion-box__description2">Do you want to learn more about the process of building digital hubs? Let’s talk!</p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek@gfi.fr/" target="_blank" rel="noopener">Book a meeting</a></div></div></div></div></td></tr></tbody></table></figure>



<h2 class="wp-block-heading" id="Capabilities-and-risks-of-using-AI"><strong>Capabilities and risks of using AI</strong>&nbsp;</h2>



<h3 class="wp-block-heading"><strong>Benefits of AI agents</strong>&nbsp;</h3>



<ul class="wp-block-list">
<li>AI agents are game-changers because they <strong>operate with autonomy</strong>, <strong>scale dynamically</strong>, <strong>learn from interaction</strong>, and <strong>integrate deeply</strong> with enterprise systems.&nbsp;</li>



<li>Once launched, they relieve teams from repetitive work – running reporting, monitoring systems, or responding to emails – while maintaining 24/7 availability.&nbsp;</li>



<li>They reduce time spent, cut errors, and free professionals to focus on strategic tasks. In many early deployments, businesses have reported <strong>up to 50% gains in efficiency</strong>, especially in HR, finance, and customer service.&nbsp;&nbsp;</li>
</ul>



<h3 class="wp-block-heading"><strong>Responsible AI – risks of AI assistants</strong> </h3>



<ul class="wp-block-list">
<li>With this power comes responsibility – it is not without reason that <strong>responsible AI</strong> is so widely discussed. AI agents – especially those relying on LLMs – raise <strong>explainability concerns</strong>. Decision logs must be traceable, with chain-of-thought audits that shed light on agent reasoning.&nbsp;</li>



<li>Accountability needs to be clear: which human is overseeing the agent’s actions? Guardrails, access controls, and regular testing are essential to prevent data leaks or manipulation.&nbsp;</li>



<li>Lastly, transparency is key – users must know they’re interacting with AI, in line with regulations like GDPR and the EU AI Act.&nbsp;&nbsp;</li>
</ul>



<h2 class="wp-block-heading" id="Use-cases-across-the-enterprise"><strong>Use cases across the enterprise</strong>&nbsp;</h2>



<p>Let’s bring these capabilities to life with practical examples:&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Customer Service – </strong>a telecom provider deployed conversational agents that resolve routine support requests, freeing human agents to focus on high-touch cases and increasing customer satisfaction.&nbsp;&nbsp;</li>



<li><strong>Finance &amp; Risk – </strong>a financial institution uses a combination of reactive agents for fraud alerts and deliberative agents for compliance task automation – later validated through real-time internal audits.&nbsp;&nbsp;</li>



<li><strong>Sales &amp; Marketing – </strong>startup sales teams use lead-scoring agents that automatically rank prospects, generate outreach templates, and initiate cadences – removing manual steps from campaign execution.&nbsp;&nbsp;</li>



<li><strong>HR –</strong> a global services firm adopted tool-using agents to manage onboarding paperwork, schedule training, and answer benefit inquiries – dramatically reducing admin time and increasing satisfaction.&nbsp;&nbsp;</li>



<li><strong>R&amp;D –</strong> a life sciences company built agents that review clinical literature, extract findings, and surface key insights ahead of research meetings, cutting analyst prep time by 60%.&nbsp;&nbsp;</li>
</ul>



<p>Stories like these show how agents – working in context and across tools – are delivering efficiency, accuracy, and business agility.&nbsp;</p>



<h2 class="wp-block-heading" id="How-to-get-started-with-AI-agents?-Start-smart!"><strong>How to get started with AI agents? Start smart!&nbsp;</strong>&nbsp;</h2>



<p>Launching agents is as much about strategic discipline as it is about technology. Here’s our seven-step roadmap:&nbsp;</p>



<p>1. <strong>Identify a focused use case</strong> – one that is measurable and meaningful.&nbsp;</p>



<p>2. <strong>Run a short pilot (4–8 weeks)</strong> to surface technical and UX insights.&nbsp;</p>



<p>3. <strong>Ensure clear, accessible data</strong> to power reliable agent reasoning.&nbsp;</p>



<p>4. <strong>Build in human oversight</strong>, especially in critical areas like compliance or finance.&nbsp;</p>



<p>5. <strong>Measure rigorously</strong>, tracking effectiveness and user satisfaction.&nbsp;</p>



<p>6. <strong>Educate your team</strong> on agents’ roles, responses, and limits.&nbsp;</p>



<p>7. <strong>Scale gradually</strong>, expanding only after pilots prove reliable and beneficial.&nbsp;&nbsp;</p>






<h2 class="wp-block-heading" id="Understanding-AI-agents---summary"><strong>Understanding AI agents &#8211; summary&nbsp;</strong>&nbsp;</h2>



<ul class="wp-block-list">
<li>AI agents are autonomous software entities designed to perform tasks and make decisions within various environments. These agents can be categorized into types such as reactive, deliberative, conversational, and tool-using agents.&nbsp;</li>

<li>AI systems often deploy multiple AI agents that can work together, leveraging their unique strengths. For instance, an AI assistant can act as a specialized agent to help users with specific tasks, while others may focus on data processing or decision-making.&nbsp;</li>

<li>AI agents are designed to interact with their environment and can adapt based on feedback, showcasing the difference between traditional AI and more complex agentic AI systems. By utilizing AI agents, organizations can leverage AI capabilities to address real-world problems more effectively.&nbsp;</li>
</ul>



<p>Check our website for more insights on how we develop <a href="https://www.inetum.com/en/ai-and-data" target="_blank" rel="noopener">AI &amp; Data projects!</a></p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><br><div class="promotion-box promotion-box--image-left promotion-box--full-width-without-image"><div class="tiles latest-news-once"><div class="tile"><div class="tile-content"><p class="promotion-box__description2"><strong>Consult your project directly with a specialist</strong></p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek@gfi.fr/" target="_blank" rel="noopener">Book a meeting</a></div></div></div></div></td></tr></tbody></table></figure>


<div id="rank-math-faq" class="rank-math-block">
<div class="rank-math-list ">
<div id="faq-question-1754300460517" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>Does every AI agent use an LLM?</strong></h3>
<div class="rank-math-answer ">

<p>No. Some rely on rule-based logic or classical machine learning. LLMs are powerful but not mandatory. </p>

</div>
</div>
<div id="faq-question-1754300490722" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>How long and costly is a pilot?</strong></h3>
<div class="rank-math-answer ">

<p>Typically 4–8 weeks and $15k–$60k, depending on integrations and scope.  </p>

</div>
</div>
<div id="faq-question-1754300516340" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>How do agents differ from chatbots?</strong></h3>
<div class="rank-math-answer ">

<p>Chatbots answer interactively; agents act proactively, handle multi-step tasks, and integrate with other systems – more like digital coworkers. </p>

</div>
</div>
<div id="faq-question-1754300545468" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>What are common pitfalls?</strong></h3>
<div class="rank-math-answer ">

<p>Avoid vague goals, poor data, lack of oversight, skipped testing, and zero user training – these erode trust and slow adoption. </p>

</div>
</div>
<div id="faq-question-1754300583405" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>What makes AI responsible? </strong></h3>
<div class="rank-math-answer ">

<p>Responsible AI means agents operate within ethical and legal guardrails: transparent, traceable decision-making; clear human accountability; strong data protection; and compliance with regulations such as GDPR and the EU AI Act. As agents become more autonomous and more integrated into society, accountability and human oversight must be designed in from the start. </p>

</div>
</div>
</div>
</div>]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/ai-agents-your-new-digital-coworkers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Revolutionizing coding with the AI coding agent: code generation and agent mode with MCP (Model Context Protocol) </title>
		<link>https://nearshore-it.eu/articles/ai-coding-agent/</link>
					<comments>https://nearshore-it.eu/articles/ai-coding-agent/#respond</comments>
		
		<dc:creator><![CDATA[Eryk Schubert]]></dc:creator>
		<pubDate>Fri, 13 Jun 2025 11:18:01 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=37313</guid>

					<description><![CDATA[ This article unpacks how that trio turbocharges productivity, why MCP is more than just another acronym, and where the tech is headed next. ]]></description>
										<content:encoded><![CDATA[
<p>Software engineering hasn&#8217;t seen a shake-up this dramatic since open source took off. Enter the AI coding agent &#8211; a self-directed teammate that drafts, debugs and reshapes code so you can stay focused on architecture and product goals. When you combine one of these agents with Visual Studio Code&#8217;s new agent mode and the open-standard Model Context Protocol (MCP), quick autocomplete hacks evolve into an end-to-end, full-stack workflow that&#8217;s revolutionizing coding practices.</p>



<div class="table-of-contents">
    <p class="title">Go to </p>
    <ol>
                    <li><a href="#What-is-an-AI-coding-agent">1.  What is an AI coding agent</a></li>
                    <li><a href="#Why-the-Model-Context-Protocol-(MCP)-matters?-">2.  Why the Model Context Protocol (MCP) matters? </a></li>
                    <li><a href="#Tackling-coding-challenges-with-AI-agents-">3.  Tackling coding challenges with AI agents </a></li>
                    <li><a href="#How-AI-can-automate-repetitive-coding-tasks-">4.  How AI can automate repetitive coding tasks </a></li>
                    <li><a href="#The-Future-of-Software-Development-with-AI-agents">5.  The Future of Software Development with AI Agents</a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="What-is-an-AI-coding-agent"><strong>What is an AI coding agent, and how does it work?</strong>&nbsp;</h2>



<p>An <a href="https://nearshore-it.eu/articles/ai-agents-your-new-digital-coworkers/">AI agent</a> represents a paradigm shift in software development, moving beyond traditional coding assistance to autonomous programming capabilities. It’s an autonomous <strong>agent</strong> that understands the broader project context, plans multi-step tasks, and executes them independently. Unlike simple tools that operate on a single file, an <strong>agent with code</strong> access has insight into the entire <strong>codebase</strong>, allowing it to perform complex refactoring and implement new features in a way that’s consistent with the existing architecture. </p>



<p>Its operation relies on advanced <strong>language models</strong> (<strong>LLMs</strong>) and a special <strong>agent framework</strong>, which allows it to interact with the development environment.&nbsp;</p>



<h3 class="wp-block-heading"><strong>AI coding agent ≠ chatty AI assistant</strong>&nbsp;</h3>



<p>Classical digital assistants are brilliant at sorting emails or juggling meetings. A coding agent, however, is built for serious engineering:&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Whole-repo vision</strong> &#8211; not just the file you’re staring at.&nbsp;</li>

<li><strong>Multi-step planning</strong> &#8211; think “replace our auth pipeline” rather than single-line completions.&nbsp;</li>

<li><strong>Hands-on execution</strong> &#8211; it can run shell commands, execute tests and land patches &#8211; no human copy-paste required.&nbsp;</li>
</ul>



<p>GitHub’s Copilot Agent, for example, can launch a VM, clone your repo, add a feature, and open a pull request &#8211; all without leaving your IDE.&nbsp;</p>



<h3 class="wp-block-heading"><strong>How does agent mode in VS Code improve workflow?</strong>&nbsp;</h3>



<p>Since April 2025 every VS Code install ships with an Agent panel snug beside the editor. The agent monitors compiler output, terminal logs and file saves, so its advice is always in tune with what you’re doing.&nbsp;</p>



<h3 class="wp-block-heading">Why devs love it:&nbsp;</h3>



<ul class="wp-block-list">
<li><strong>Inline roadmaps</strong> &#8211; before touching a line, the agent tells you the steps it plans to follow.&nbsp;</li>

<li><strong>Self-healing loops</strong> &#8211; if a test bombs, it iterates until green.&nbsp;</li>

<li><strong>Effortless tool-chaining</strong> &#8211; thanks to MCP, the agent can ping databases, cloud APIs or vector stores without you wiring things up manually.&nbsp;</li>
</ul>
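<p>The self-healing idea boils down to a loop that runs the tests, feeds the failure back to a model, and iterates until green. In the sketch below the “model” is a canned patch rather than a real LLM call, so the example stays self-contained:</p>

```python
# Toy self-healing loop: run tests, patch on failure, repeat until green.

def run_tests(code):
    """Stand-in for a test runner: returns a failure message, or None if green."""
    try:
        env = {}
        exec(code, env)
        assert env["add"](2, 3) == 5
        return None
    except Exception as err:
        return f"{type(err).__name__}: {err}"

def stub_model_patch(code, failure):
    # A real agent would prompt an LLM with the code and the failure output.
    return "def add(a, b):\n    return a + b\n"

def self_heal(code, max_iterations=3):
    for _ in range(max_iterations):
        failure = run_tests(code)
        if failure is None:
            return code, "green"
        code = stub_model_patch(code, failure)
    return code, "gave up"

buggy = "def add(a, b):\n    return a - b\n"
fixed, status = self_heal(buggy)
```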



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="948" height="328" src="https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_1.png" alt="Agent AI" class="wp-image-37334" title="Revolutionizing coding with the AI coding agent: code generation and agent mode with MCP (Model Context Protocol)  5" srcset="https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_1.png 948w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_1-300x104.png 300w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_1-768x266.png 768w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_1-495x171.png 495w" sizes="auto, (max-width: 948px) 100vw, 948px" /></figure>



<h2 class="wp-block-heading" id="Why-the-Model-Context-Protocol-(MCP)-matters?-"><strong>Why the Model Context Protocol (MCP) matters</strong>&nbsp;</h2>



<p>LLMs perform best with deep context. The Model Context Protocol is a vendor-neutral conduit that streams project graphs, tool specs and dynamic data into any AI agent. Anthropic’s open-source MCP servers expose file systems, cloud APIs or local Postgres databases without writing extra code.&nbsp;</p>
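<p>To make the idea concrete, here is a minimal, vendor-neutral sketch of what an MCP-style server does: advertise tool specs and serve calls over a uniform JSON interface. This is <em>not</em> the real MCP SDK – the request and field shapes are simplified for illustration:</p>

```python
# Simplified sketch of MCP-style tool exposure (not the actual MCP SDK).
import json

class ToolServer:
    def __init__(self):
        self.tools = {}

    def tool(self, name, description):
        """Decorator that registers a function as a discoverable tool."""
        def register(fn):
            self.tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def handle(self, request_json):
        req = json.loads(request_json)
        if req["method"] == "tools/list":          # discovery
            result = [{"name": n, "description": t["description"]}
                      for n, t in self.tools.items()]
        else:                                      # "tools/call": invocation
            tool = self.tools[req["params"]["name"]]
            result = tool["fn"](**req["params"]["arguments"])
        return json.dumps({"result": result})

server = ToolServer()

@server.tool("read_file", "Read a project file")
def read_file(path):
    return {"path": path, "content": "print('hello')"}  # stubbed file system

listing = json.loads(server.handle(json.dumps({"method": "tools/list"})))
call = json.loads(server.handle(json.dumps(
    {"method": "tools/call",
     "params": {"name": "read_file", "arguments": {"path": "app.py"}}})))
```

<p>Because discovery and invocation share one wire format, any agent that speaks the protocol can use any server’s tools without bespoke glue code.</p>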



<h3 class="wp-block-heading"><strong>What MCP adds</strong>&nbsp;</h3>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><strong>Advantage</strong>&nbsp;</td><td><strong>Impact</strong>&nbsp;</td></tr><tr><td>Deep context&nbsp;</td><td>Entire project graphs beat isolated snippets.&nbsp;</td></tr><tr><td>Real-time collaboration&nbsp;</td><td>One agent session adapts as multiple people edit.&nbsp;</td></tr><tr><td>Plug-in architecture&nbsp;</td><td>Drop-in MCP servers expose GitHub, AWS, or a local Postgres DB without writing glue code.&nbsp;</td></tr></tbody></table></figure>



<h3 class="wp-block-heading"><strong>Popular MCP servers</strong>&nbsp;</h3>



<p>The open-source community maintains a growing list of servers that expose everything from file systems to cloud APIs. I encourage you to check the list of available MCP servers: <a href="https://github.com/modelcontextprotocol/servers" target="_blank" rel="noreferrer noopener">modelcontextprotocol/servers: Model Context Protocol Servers</a>&nbsp;</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="546" height="375" src="https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_2.png" alt="agent ai" class="wp-image-37331" title="Revolutionizing coding with the AI coding agent: code generation and agent mode with MCP (Model Context Protocol)  6" srcset="https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_2.png 546w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_2-300x206.png 300w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_2-495x340.png 495w" sizes="auto, (max-width: 546px) 100vw, 546px" /></figure>



<h3 class="wp-block-heading"><strong>What are the advantages of using MCP for code generation?</strong>&nbsp;</h3>



<p>MCP transforms code generation from isolated snippets into context-aware, full-stack solutions. Instead of producing isolated fragments, the <strong>agent</strong> gains the ability to create coherent, contextual, <strong>high-quality code</strong> that is consistent with the rest of the project.&nbsp;</p>



<h4 class="wp-block-heading">The main benefits include:&nbsp;</h4>



<ul class="wp-block-list">
<li><strong>Understanding complex code relationships:</strong> By accessing entire codebases through MCP, agents generate high-quality code that respects existing patterns, dependencies, and architectural decisions. The result is more efficient code.&nbsp;</li>

<li><strong>Complex code generation:</strong> With access to tools (tool calling) and external APIs, the agent can generate code that integrates with other systems and works with actual data.&nbsp;</li>

<li><strong>Improved code quality:</strong> The agent can use static analysis tools or run tests, ensuring that the code generated is not only functional but also adheres to best practices.&nbsp;</li>
</ul>



<h2 class="wp-block-heading" id="Tackling-coding-challenges-with-AI-agents-"><strong>Tackling coding challenges with AI agents</strong>&nbsp;</h2>



<p>Coding challenges can often be daunting, especially for beginners or when tackling unfamiliar problems. AI coding agents can provide step-by-step guidance, breaking down complex problems into manageable parts. AI coding agents can assist in several ways:&nbsp;</p>



<ul class="wp-block-list">
<li>Generate solutions for complex coding tasks&nbsp;</li>

<li>Instant feedback &amp; debugging&nbsp;</li>

<li>Adaptive learning&nbsp;</li>

<li>Repetitive tasks automation&nbsp;</li>
</ul>



<h3 class="wp-block-heading"><strong>Using AI to generate solutions for complex coding tasks</strong>&nbsp;</h3>



<p>When faced with a complex problem, developers can describe it to the AI agent. The agent won&#8217;t just suggest a different code snippet; it can build code as a complete solution, which the developer can then iteratively refine. Using the code interpreter tool, a developer can analyse step-by-step how the agent arrived at the final solution.&nbsp;</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1209" height="1034" src="https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_3.png" alt="agent ai" class="wp-image-37340" title="Revolutionizing coding with the AI coding agent: code generation and agent mode with MCP (Model Context Protocol)  7" srcset="https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_3.png 1209w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_3-300x257.png 300w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_3-768x657.png 768w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_3-462x395.png 462w" sizes="auto, (max-width: 1209px) 100vw, 1209px" /></figure>



<h3 class="wp-block-heading"><strong>Debugging with code review agents: a new approach</strong>&nbsp;</h3>



<p>Debugging is traditionally 20% diagnosis, 80% boredom. It can be one of the most frustrating and time-consuming aspects of coding. AI coding agents offer a fresh approach to this challenge.&nbsp;</p>



<ul class="wp-block-list">
<li>Developers can receive immediate feedback on their code, helping them identify errors or inefficiencies quickly.&nbsp;</li>

<li>By understanding the context of the code, the AI agent can provide more relevant debugging suggestions, making it easier to resolve issues.&nbsp;</li>

<li>As developers interact with the AI, it can learn from their debugging patterns, improving its suggestions over time.&nbsp;</li>
</ul>



<p>AI agents can automatically pinpoint the root cause of compilation errors, suggest precise fixes and even generate unit tests to prevent regressions.&nbsp;</p>



<h3 class="wp-block-heading"><strong>Adaptive learning and skill building</strong>&nbsp;</h3>



<p>For developers looking to improve their skills, AI coding agents act as intelligent tutors. They provide detailed explanations for their suggestions, helping users understand best practices while building coding experience through hands-on practice with real-world scenarios.&nbsp;</p>



<h2 class="wp-block-heading" id="How-AI-can-automate-repetitive-coding-tasks-"><strong>How AI can automate repetitive coding tasks</strong>&nbsp;</h2>



<p>Repetitive tasks can drain a developer&#8217;s productivity; boilerplate is the kryptonite of creativity. AI coding agents can relieve the pain in several ways.&nbsp;</p>



<ul class="wp-block-list">
<li>Project scaffolding: Single-prompt generation of project structures, build scripts, and configuration files&nbsp;</li>

<li>Context-aware completions: Intelligent suggestions that understand your codebase and use code patterns consistently&nbsp;</li>

<li>Automated documentation: Generated inline comments, API summaries, and README files that keep projects well-documented&nbsp;</li>

<li>Integration assistance: Streamlined API and library integration with automatic configuration and error handling&nbsp;</li>
</ul>
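<p>Project scaffolding, for instance, boils down to expanding a spec into files on disk. In the sketch below the spec is hard-coded; a real agent would derive it from your prompt:</p>

```python
# Toy scaffolding step: expand a (here hard-coded) project spec into files.
import os
import tempfile

SCAFFOLD = {
    "README.md": "# demo-service\n",
    "src/main.py": "def main():\n    print('hello')\n",
    "tests/test_main.py": "from src.main import main\n",
}

def scaffold(root, spec):
    """Create every file in the spec, building directories as needed."""
    created = []
    for rel_path, content in spec.items():
        path = os.path.join(root, rel_path)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as fh:
            fh.write(content)
        created.append(rel_path)
    return created

root = tempfile.mkdtemp()
created = scaffold(root, SCAFFOLD)
```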



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1076" height="584" src="https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_4.png" alt="agent ai" class="wp-image-37337" title="Revolutionizing coding with the AI coding agent: code generation and agent mode with MCP (Model Context Protocol)  8" srcset="https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_4.png 1076w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_4-300x163.png 300w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_4-768x417.png 768w, https://nearshore-it.eu/wp-content/uploads/2025/06/nearshore_2025.06.11_graphic_4-495x269.png 495w" sizes="auto, (max-width: 1076px) 100vw, 1076px" /></figure>



<h2 class="wp-block-heading" id="The-Future-of-Software-Development-with-AI-agents"><strong>What is the Future of Software Development with AI Agents?</strong>&nbsp;</h2>



<p>The future of software development is likely to be heavily influenced by AI coding agents. It isn&#8217;t just about coding support from assistants like GitHub Copilot, but about fully autonomous agents. As these tools become more sophisticated, expect:&nbsp;</p>



<ul class="wp-block-list">
<li>tighter IDE integration, richer MCP toolchains and cross-agent collaboration &#8211; e.g., a design agent handing Figma specs to a coding agent that ships production React components overnight.&nbsp;</li>

<li>agents to mediate merge conflicts, enforce security policies, and hand off tasks between design, data and DevOps agents. Newcomers will onboard in hours, not weeks, because every repo ships with an embedded tutor.&nbsp;</li>
</ul>



<p>Agentic AI will manage the entire application lifecycle – from planning, through coding, testing, and deployment, to monitoring.&nbsp;</p>



<h2 class="wp-block-heading"><strong>Conclusion</strong>&nbsp;</h2>



<p>Pairing an AI coding agent with VS Code’s Agent Mode and the Model Context Protocol turns development from a tooling grind into true collaboration. With boilerplate outsourced, debugging streamlined and project context unified, you’re free to focus on solving real problems. Fire up your editor, toggle Agent Mode, plug in an MCP server &#8211; and see what happens when AI shoulders the grunt work so you don’t have to.&nbsp;</p>



<h2 class="wp-block-heading"><strong>Frequently Asked Questions About AI Coding Agents</strong></h2>


<div id="rank-math-faq" class="rank-math-block">
<div class="rank-math-list ">
<div id="faq-question-1749732426010" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>What’s the difference between an AI coding agent and traditional coding tools? </strong> </h3>
<div class="rank-math-answer ">

<p>Traditional tools do static syntax help; agents provide live, context-aware actions and learn from your patterns. </p>

</div>
</div>
<div id="faq-question-1749732428094" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>How Secure is Code Generated by AI Agents?</strong> </h3>
<div class="rank-math-answer ">

<p>AI-generated code undergoes the same security scrutiny as human-written code. Modern agents incorporate security best practices and can even perform automated security audits. However, developers should always review generated code and apply appropriate testing before deployment. </p>

</div>
</div>
<div id="faq-question-1749732430476" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>Can AI Agents Replace Human Developers?</strong> </h3>
<div class="rank-math-answer ">

<p>AI agents augment rather than replace human developers. They excel at routine tasks, code generation, and debugging assistance, but human creativity, strategic thinking, and domain expertise remain essential for successful software engineering projects. </p>

</div>
</div>
<div id="faq-question-1749732500131" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>Can AI agents handle complex code execution?</strong> </h3>
<div class="rank-math-answer ">

<p>Yes. With MCP and code interpreter tools, agents can run Python code, execute terminal commands and integrate third-party APIs to perform actions end to end. </p>

</div>
</div>
<div id="faq-question-1749732523112" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>Can an AI agent work with my existing code and repository?</strong> </h3>
<div class="rank-math-answer ">

<p>Yes. This is one of its key strengths. A modern AI coding agent is designed to work with existing code. Thanks to protocols like MCP, the agent can securely access an entire repository, analyse the codebase, and introduce code changes in a way that is consistent with the project. </p>

</div>
</div>
<div id="faq-question-1749732556682" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>How does an AI coding agent differ from LLMs like those from OpenAI?</strong> </h3>
<div class="rank-math-answer ">

<p>An LLM (e.g., GPT-4) is a language engine – it can generate text and code based on input. An AI coding agent is a complete system that uses an LLM as its &#8220;brain&#8221; but adds an execution layer (an agent framework) that allows it to plan autonomously, interact with tools (MCP tools, APIs), and perform actions in a development environment. An agent is much more than just the AI model. </p>

</div>
</div>
<div id="faq-question-1749732631380" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>Can I use AI models to build code in Visual Studio Code?</strong> </h3>
<div class="rank-math-answer ">

<p>Yes, you can use advanced AI models within Visual Studio Code. They provide seamless integration, allowing you to write code and resolve any issues directly in the VS Code environment. </p>

</div>
</div>
<div id="faq-question-1749732671790" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>What is agent mode and how does it change the development tools landscape?</strong> </h3>
<div class="rank-math-answer ">

<p>Agent mode is a feature available in specific AI models that allows them to operate autonomously, executing tasks without constant human input, which significantly enhances productivity and streamlines the coding process. </p>

</div>
</div>
<div id="faq-question-1749732685780" class="rank-math-list-item">
<h3 class="rank-math-question "><strong>What common development tasks can AI agents assist with?</strong> </h3>
<div class="rank-math-answer ">

<p>AI agents can assist with a variety of common development tasks, including writing code, debugging, testing, and optimizing code quality, making the coding process more efficient and effective. </p>

</div>
</div>
</div>
</div>


<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/ai-coding-agent/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Maximize Efficiency: Essential Asset Maintenance Management Strategy </title>
		<link>https://nearshore-it.eu/technologies/asset-maintenance-management-strategy/</link>
					<comments>https://nearshore-it.eu/technologies/asset-maintenance-management-strategy/#respond</comments>
		
		<dc:creator><![CDATA[-- Nie pokazuj autora --]]></dc:creator>
		<pubDate>Wed, 04 Dec 2024 14:36:08 +0000</pubDate>
				<category><![CDATA[Technologies]]></category>
		<category><![CDATA[Articles]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=34121</guid>

					<description><![CDATA[Find out how to create an asset maintenance plan and what are the benefits of successful asset maintenance.  ]]></description>
										<content:encoded><![CDATA[
<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#What-CEOs-talked-about-in-Q3-2024?">1.  AI, outages, and more&#8230; What CEOs talked about in Q3 2024? </a></li>
                    <li><a href="#What-is-asset-maintenance-management?">2.  What is asset maintenance management? </a></li>
                    <li><a href="#What-is-reactive-maintenance?">3.  What is reactive maintenance?</a></li>
                    <li><a href="#What-is-preventive-maintenance?-">4.  What is preventive maintenance? </a></li>
                    <li><a href="#What-is-CMMS-software?">5.  What is CMMS software?  </a></li>
                    <li><a href="#key-challenges-in-asset-maintenance">6.  Key challenges in asset maintenance</a></li>
                    <li><a href="#Importance-of-asset-maintenance-plan-across-industries-">7.  Importance of asset maintenance plan across industries</a></li>
                    <li><a href="#Asset-maintenance-strategy">8.  Asset maintenance strategy</a></li>
                    <li><a href="#benefits-of-asset-maintenance-management-software-">9.  Benefits of asset maintenance management software </a></li>
                    <li><a href="#Asset-management-system-success-story">10.  Asset management system success story  </a></li>
            </ol>
</div>


<p>According to Forbes, over 80% of organizations have experienced unplanned downtime in the past three years. At a time of economic uncertainty and pressure to optimize costs, companies are looking for solutions that simplify asset management. Implementing the software is only the beginning of the challenges and decisions to be made.</p>



<h2 class="wp-block-heading" id="What-CEOs-talked-about-in-Q3-2024?"><strong>AI, outages, and more&#8230; What CEOs talked about in Q3 2024? </strong></h2>



<p>The end of 2024 brought the latest edition of the report on the topics of most interest to CEOs, ‘What CEOs talked about’, <a href="https://iot-analytics.com/what-ceos-talked-about-in-q3-2024/" target="_blank" rel="noreferrer noopener">by IoT Analytics.</a> There are rarely big surprises, but the trends show which topics are most likely to attract investment in 2025. The top three included:&nbsp;</p>



<p>1) <strong>Practical AI applications</strong> – interest in AI capabilities is high in any industry looking to improve performance and accelerate tasks. Process automation, digital twins, and AI-based predictive maintenance solutions are in the spotlight.&nbsp;</p>



<p>2)<strong> Renewable energy sources </strong>– in Q3 2024, interest in solar energy increased by almost 80% quarter-on-quarter and wind energy by almost 60%. Sustainability, green energy, and climate change are important themes globally, with some regional variations.&nbsp;</p>



<p>3)<strong> Unpredictable IT outages </strong>&#8211; 2024 brought an outage that was on everyone&#8217;s lips: the CrowdStrike/Microsoft incident related to cyber security. Unpredictable outages like this can cost thousands of dollars per hour, depending on the industry. This is why companies are prioritizing solutions that prevent unforeseen failures, such as Enterprise Asset Management solutions.&nbsp;</p>



<h2 class="wp-block-heading" id="What-is-asset-maintenance-management?"><strong>What is asset maintenance management? </strong></h2>



<p>Asset Maintenance Management or Enterprise Asset Management (EAM) is a comprehensive approach to managing an organization&#8217;s physical assets across their entire lifecycle. It encompasses the processes, tools, and systems used to plan, acquire, maintain, operate, and dispose of assets in a way that maximizes their value, minimizes costs, and ensures operational efficiency.&nbsp;&nbsp;</p>



<p>Asset Management is particularly relevant for organizations with significant investments in physical infrastructure or equipment, such as manufacturing, utilities, transportation, and healthcare.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="What-is-reactive-maintenance?"><strong>What is reactive maintenance?</strong>&nbsp;</h2>



<p>Reactive maintenance (also known as corrective maintenance) is the type of maintenance that involves repairing an asset once it is broken. This means responding to failures or breakdowns rather than taking preventive actions.&nbsp;</p>



<h2 class="wp-block-heading" id="What-is-preventive-maintenance?-"><strong>What is preventive maintenance?</strong>&nbsp;</h2>



<p>Preventive maintenance is a wide subject, and the general aim is to prevent breakdowns before they happen and thus take care of the asset lifecycle. One of the types of PM is<a href="https://nearshore-it.eu/articles/condition-based-maintenance-cbm-explained/" data-type="post" data-id="29044"> Condition-Based Monitoring</a>. In the CBM approach, activities take place based on machine condition-related data, collected through sensors.  </p>
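<p>As a minimal illustration of the CBM idea, a rule can turn a sensor reading into a maintenance work order when the reading exceeds a condition threshold. The sketch below is hypothetical: the field names, units, and limits are illustrative and not taken from any specific CMMS product.</p>

```typescript
// Illustrative condition-based maintenance rule. All names, units,
// and thresholds here are hypothetical examples.
interface SensorReading {
  assetId: string;
  vibrationMmPerSec: number; // RMS vibration velocity from a sensor
  temperatureC: number;
}

interface WorkOrder {
  assetId: string;
  reason: string;
}

// Raise a work order when a reading exceeds a condition threshold;
// return null when the asset's condition is within limits.
function evaluateReading(
  reading: SensorReading,
  limits = { vibrationMmPerSec: 7.1, temperatureC: 85 }
): WorkOrder | null {
  if (reading.vibrationMmPerSec > limits.vibrationMmPerSec) {
    return { assetId: reading.assetId, reason: "vibration above limit" };
  }
  if (reading.temperatureC > limits.temperatureC) {
    return { assetId: reading.assetId, reason: "temperature above limit" };
  }
  return null;
}
```

<p>In a real deployment the thresholds would come from the asset's maintenance plan, and the resulting work order would be pushed into the CMMS scheduling queue.</p>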



<h2 class="wp-block-heading" id="What-is-CMMS-software?"><strong>What is CMMS software?&nbsp;</strong>&nbsp;</h2>



<p>CMMS stands for computerized maintenance management system. It refers to software that helps manage assets, plan maintenance work, and manage work orders. A CMMS also tracks inventory, allowing you to make informed decisions.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="key-challenges-in-asset-maintenance"><strong>What are the key challenges in asset maintenance management? </strong></h2>



<ol start="1" class="wp-block-list">
<li><strong>Identifying inefficient maintenance</strong> – relying on reactive asset management techniques adds costs and casts a dim view on investment in this area. Companies unaware of this problem struggle with a vast number of work orders, fail to see the value in equipment data, and try to address staff shortages by temporarily hiring contractors. Meanwhile, the main problem remains unresolved and drives further spending.  </li>
</ol>



<ol start="2" class="wp-block-list">
<li><strong>Creating an asset maintenance plan </strong>– creating a successful asset maintenance plan is no easy task. It&#8217;s a complex process that involves gathering asset inventory along with their value, calculating expected lifecycles, establishing SLAs and an optimum maintenance plan, and then sticking to the strategy over the long term.  </li>
</ol>



<ol start="3" class="wp-block-list">
<li><strong>Choosing asset management software </strong>– such software is designed to follow the life cycle of assets, whether software, hardware, or machinery. A properly chosen solution increases confidence in data security and in the devices themselves. There are many asset management solutions on the market, many of which offer extensive customization, integrate with external systems, and take advantage of the cloud. One example allowing you to build an EAM is the <a href="https://cumulocity.com/" target="_blank" rel="noreferrer noopener">Cumulocity IoT platform. </a> </li>
</ol>



<ol start="4" class="wp-block-list">
<li><strong>Training maintenance team</strong> –<strong> </strong>investing in your technicians is crucial for implementing a successful maintenance strategy that incorporates Computerized Maintenance Management Systems (CMMS). Great training leads to well-prepared staff that can handle the maintenance process with ease. The trained team can effectively perform maintenance tasks and adhere to a preventive maintenance schedule. This ensures that maintenance needs are addressed proactively, thus preventing asset failures, and extending the asset life. </li>
</ol>



<h2 class="wp-block-heading" id="Importance-of-asset-maintenance-plan-across-industries-"><strong>Importance of asset maintenance plan across industries</strong>&nbsp;</h2>



<p>Asset management solutions are critical in industries that rely heavily on physical assets, such as:&nbsp;&nbsp;</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="756" height="732" src="https://nearshore-it.eu/wp-content/uploads/2024/12/nearshore_2023.11.28_graphic_1.png" alt="nearshore 2023.11.28 graphic 1" class="wp-image-34140" title="Maximize Efficiency: Essential Asset Maintenance Management Strategy  9" srcset="https://nearshore-it.eu/wp-content/uploads/2024/12/nearshore_2023.11.28_graphic_1.png 756w, https://nearshore-it.eu/wp-content/uploads/2024/12/nearshore_2023.11.28_graphic_1-300x290.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/12/nearshore_2023.11.28_graphic_1-408x395.png 408w" sizes="auto, (max-width: 756px) 100vw, 756px" /></figure>



<h3 class="wp-block-heading"><strong>Manufacturing&nbsp;</strong>&nbsp;</h3>



<p>IoT solutions collect real-time data from devices and parts for health monitoring purposes. This allows for the application of predictive maintenance solutions and the reduction of downtime and unnecessary repairs. With the right system, companies can schedule maintenance and engage maintenance teams to perform tasks more efficiently.&nbsp;&nbsp;</p>



<h3 class="wp-block-heading"><strong>Sales and procurement</strong>&nbsp;</h3>



<p>The area is focused on ensuring that the right assets are procured or produced at the right time to meet operational needs.&nbsp;&nbsp;</p>



<h3 class="wp-block-heading"><strong>Finance and controlling&nbsp;</strong>&nbsp;</h3>



<p>This area is related to financial planning, budgeting, and control mechanisms for managing the aspects of asset acquisition, its maintenance, and overall lifecycle costs.&nbsp;&nbsp;</p>



<h3 class="wp-block-heading"><strong>Compliance</strong>&nbsp;</h3>



<p>Regulatory adherence and compliance with legal standards related to asset management is an important part of managing organizational assets.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Asset-maintenance-strategy"><strong>How to develop an effective asset maintenance strategy?</strong>&nbsp;</h2>



<p>A successful and comprehensive Enterprise Asset Management strategy is built upon three main pillars:&nbsp;&nbsp;</p>



<p>1. Asset business management&nbsp;&nbsp;</p>



<p>2. Asset operations management&nbsp;&nbsp;</p>



<p>3. Asset service management&nbsp;&nbsp;</p>



<p>These areas are closely connected and dependent upon each other, covering the whole landscape of the company&#8217;s asset-related activities from an asset&#8217;s “birth” (either by purchase or fabrication) until end-of-life and removal. A high degree of digitalization and automated integration is crucial to taking advantage of this holistic approach.&nbsp;&nbsp;</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="756" height="698" src="https://nearshore-it.eu/wp-content/uploads/2024/12/nearshore_2023.11.28_graphic_2.png" alt="nearshore 2023.11.28 graphic 2" class="wp-image-34135" title="Maximize Efficiency: Essential Asset Maintenance Management Strategy  10" srcset="https://nearshore-it.eu/wp-content/uploads/2024/12/nearshore_2023.11.28_graphic_2.png 756w, https://nearshore-it.eu/wp-content/uploads/2024/12/nearshore_2023.11.28_graphic_2-300x277.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/12/nearshore_2023.11.28_graphic_2-428x395.png 428w" sizes="auto, (max-width: 756px) 100vw, 756px" /></figure>



<h2 class="wp-block-heading" id="benefits-of-asset-maintenance-management-software-"><strong>What are the benefits of asset maintenance management software?</strong>&nbsp;</h2>



<ul class="wp-block-list">
<li><strong>Extended asset lifespan: </strong>By providing the tools to maintain assets proactively, EAM systems help organizations extend the useful life of their assets.&nbsp;&nbsp;</li>



<li><strong>Reduced downtime:</strong> Real-time monitoring, predictive maintenance, and better planning result in fewer breakdowns and less unplanned downtime.&nbsp;&nbsp;</li>



<li><strong>Cost efficiency:</strong> By optimizing maintenance schedules, reducing emergency repairs, and managing spare parts more effectively, EAM can significantly reduce operational and maintenance costs.&nbsp;&nbsp;</li>



<li><strong>Regulatory compliance: </strong>EAM systems ensure that organizations maintain compliance with industry standards and regulations, reducing legal and operational risks.&nbsp;&nbsp;</li>



<li><strong>Better decision-making:</strong> With better data and analytics on asset performance, usage, and costs, EAM systems provide insights that allow for smarter decision-making regarding asset investment, replacement, or optimization.&nbsp;&nbsp;</li>
</ul>



<p><strong>Want to dive deep into Enterprise Asset Management solutions? Prevent asset downtime with successful maintenance operations!</strong>&nbsp;<strong>Request a free demo presentation of a solution based on the Cumulocity IoT platform.</strong>&nbsp;</p>



<div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/06/BigCTA_AndrzejGumieniak.jpg" alt="BigCTA AndrzejGumieniak" title="Maximize Efficiency: Essential Asset Maintenance Management Strategy  11"></div><div class="tile-content"><p class="entry-title client-name promotion-box__headline2">Streamline Your IoT Operations</p>
<p class="promotion-box__description2"><strong>Andrzej Gumieniak</strong>, our Head of Practice IoT, is here to help you navigate the complexities of IoT solutions. Book a consultation to discuss your case.</p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithAndrzej@gfi.fr/" target="_blank" rel="noopener">Schedule a meeting</a></div></div></div></div>



<h2 class="wp-block-heading" id="Asset-management-system-success-story"><strong>Asset management system success story</strong>&nbsp;</h2>



<h3 class="wp-block-heading"><strong>Wind turbines monitoring&nbsp;</strong>&nbsp;</h3>



<p>A leader in green energy and wind turbine manufacturing<strong> </strong>faced the challenge of operating thousands of wind turbines worldwide with only a few operators.&nbsp;&nbsp;</p>



<p>To meet this challenge, we helped them streamline command of the devices and integrate everything into a central IoT solution based on Cumulocity IoT.&nbsp;&nbsp;</p>



<p><a href="https://nearshore-it.eu/wp-content/uploads/2024/11/Case_Wind_turbines_monitoring.pdf" target="_blank" rel="noreferrer noopener">Read the story!</a>&nbsp;</p>



<h3 class="wp-block-heading"><strong>Asset management – summary</strong>&nbsp;</h3>



<p>Asset management is crucial for organizations aiming to maximize the lifecycle of their assets. A comprehensive asset maintenance management plan ensures that maintenance activities are scheduled and executed effectively.&nbsp;</p>



<p>This will allow you to make more informed decisions about needed maintenance work, whether that involves preventive and predictive maintenance or any other strategy.&nbsp;</p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/technologies/asset-maintenance-management-strategy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>React Native mobile app development: comprehensive guide for cross-platform solutions</title>
		<link>https://nearshore-it.eu/articles/react-native-app-development-guide-for-mobile-app-development/</link>
					<comments>https://nearshore-it.eu/articles/react-native-app-development-guide-for-mobile-app-development/#respond</comments>
		
		<dc:creator><![CDATA[Adam Jurkiewicz]]></dc:creator>
		<pubDate>Thu, 24 Oct 2024 10:29:22 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=33658</guid>

					<description><![CDATA[Read our article to discover the advantages and disadvantages of using React Native, and find out when this approach can accelerate your software development process.]]></description>
										<content:encoded><![CDATA[
<p>Almost 15% of the top 500 apps installed in the US were created in React Native. React Native development services can significantly speed up mobile app delivery. Using this framework, developers can create a cross-platform mobile app that runs seamlessly on both iOS and Android devices. A single codebase is used throughout development, which is not only efficient but also reduces the costs associated with using different technologies for different platforms. Read about the advantages and disadvantages of this approach and find out in which cases React Native can speed up the software development process.</p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#Mobile-app-development-projects-in-the-past-">1.  Mobile app development projects in the past </a></li>
                    <li><a href="#Why-React-Native-development-has-become-popular-">2.  Why React Native development has become popular </a></li>
                    <li><a href="#What-is-React-Native-app-development,-and-why-choose-it-for-your-development-process?-">3.  What is React Native app development, and why choose it for your development process? </a></li>
                    <li><a href="#React-native-framework-vs.-native-app-development">4.  React native framework vs. native app development: when to choose Swift or Kotlin for your app development project? </a></li>
                    <li><a href="#How-React-Native’s-development-time-and-cost-compare-to-native-mobile-app-development-">5.  How React Native’s development time and cost compare to native mobile app development </a></li>
                    <li><a href="#When-React-Native-can-speed-up-the-software-development-process:-">6.  When React Native can speed up the software development process</a></li>
                    <li><a href="#When-should-you-use-React-Native-over-progressive-web-apps-(PWA)?-">7.  When should you use React Native over progressive web apps (PWA)? </a></li>
                    <li><a href="#Best-practices-for-React-Native-mobile-app-development">8.  Best practices for React Native mobile app development</a></li>
                    <li><a href="#Advantages-of-React-Native-cross-platform-app-development-">9.  Advantages of React Native cross-platform app development </a></li>
                    <li><a href="#Disadvantages-of-React-Native-cross-platform-app-development-">10.  Disadvantages of React Native cross-platform app development </a></li>
                    <li><a href="#12-reasons-to-choose-React-Native-to-develop-mobile-apps-">11.  12 reasons to choose React Native to develop mobile apps </a></li>
                    <li><a href="#React-Native-case-study-">12.  React Native case study </a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="Mobile-app-development-projects-in-the-past-">Mobile app development projects in the past&nbsp;</h2>



<p>When the mobile revolution started, most mobile applications were implemented in programming languages designed for a specific platform. Android developers used Java, while iOS developers worked with Objective-C, which was later replaced by Swift.&nbsp;&nbsp;</p>



<p>This approach made sense because the biggest challenge developers faced was performance. Apps were expected to run smoothly, designers wanted to add animations, and at the time mobile devices had far less computing power than even the slowest ones we use today.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Why-React-Native-development-has-become-popular-">Why React Native development has become popular&nbsp;</h2>



<p>Later, with more powerful devices and optimized technical solutions, things started to change. Initially, cross-platform solutions were only suitable for simple, less demanding applications. Today that is no longer an issue: mobile devices are powerful, and cross-platform solutions have been optimized to the point of being treated as “almost native”.&nbsp;</p>



<p><strong>Read also:</strong> <a href="https://nearshore-it.eu/technologies/cloud-native/">Cloud-native applications: what do you need to know?</a></p>



<h2 class="wp-block-heading" id="What-is-React-Native-app-development,-and-why-choose-it-for-your-development-process?-">What is React Native app development, and why choose it for your development process?&nbsp;</h2>



<p>React Native is one of the cross-platform solutions that developers can use to create apps for multiple platforms.&nbsp;&nbsp;</p>



<h3 class="wp-block-heading">The benefits of using React Native:&nbsp;</h3>



<ul class="wp-block-list">
<li>React Native code is partially just React code that can be reused to create a web application.&nbsp;</li>



<li>React Native deals with platform UI implementations and allows us to focus on developing features with React primitives.&nbsp;</li>



<li>In React Native, developers can use libraries and SDKs, or write their own native code.&nbsp;</li>



<li>React Native is based on React, a popular framework for web development. It means it might be easier to find good React developers who only need to learn the native part of development.&nbsp;</li>



<li>React Native performs much better nowadays. The core team introduced a new communication method between JavaScript and native elements: “Bridge” asynchronous communication has been replaced with JSI (JavaScript Interface) communication.</li>
</ul>
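<p>The code-sharing benefit above is easiest to see with business logic that has no UI imports: such a module can be consumed unchanged by both a React web app and a React Native app. A small sketch follows; the module and its names are illustrative, not from any specific project.</p>

```typescript
// shared/cart.ts -- hypothetical platform-agnostic module. It imports
// nothing from the DOM or from react-native, so both the web app and
// the mobile app can reuse it as-is.
export interface CartItem {
  name: string;
  unitPriceCents: number; // integer cents avoid floating-point drift
  quantity: number;
}

// Pure function: sums line totals across the cart.
export function cartTotalCents(items: CartItem[]): number {
  return items.reduce((sum, item) => sum + item.unitPriceCents * item.quantity, 0);
}

// Pure function: renders cents as a dollar string.
export function formatCents(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// Web (React):          <span>{formatCents(cartTotalCents(items))}</span>
// Mobile (React Native): <Text>{formatCents(cartTotalCents(items))}</Text>
```

<p>Only the thin rendering layer differs per platform; the logic, and its unit tests, are written once.</p>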



<h2 class="wp-block-heading" id="React-native-framework-vs.-native-app-development">React native framework vs. native app development: when to choose Swift or Kotlin for your app development project?&nbsp;</h2>



<p>Even though there are advantages to using React Native, some cons can be found as well. Due to their nature, cross-platform solutions cannot compete with native ones in some aspects.&nbsp;</p>



<h3 class="wp-block-heading">Potential React Native issues:&nbsp;</h3>



<ul class="wp-block-list">
<li>Near-native performance is still worse than native performance, and some complex projects might need to be developed natively, especially when they involve many interactions.&nbsp;</li>



<li>Some parts of the app might still need to be developed using native programming languages. That means that either React developers need to learn how to do it, or some help from native developers might be needed from time to time.&nbsp;</li>



<li>React Native is a relatively new technology, so some modules might need to be built from scratch. This can slow down development if the team is not aware of the limitation.&nbsp;</li>



<li>React Native was built by the Facebook team and has less support and backward compatibility than native technologies. It is used in many projects, so it won’t be abandoned in a day. Nonetheless, its popularity heavily depends on Facebook (Meta) and its use in their apps.</li>
</ul>



<h2 class="wp-block-heading" id="How-React-Native’s-development-time-and-cost-compare-to-native-mobile-app-development-">How React Native’s development time and cost compare to native mobile app development&nbsp;</h2>



<p>When we try to make a comparison between React Native and native app development, we should first focus on business requirements and company expectations. Some projects should be delivered as soon as possible to be later improved and extended. Other projects have already been developed as web applications and the goal is to port them to mobile applications. Those are probably the most common cases where React Native can come in handy.&nbsp;</p>



<h2 class="wp-block-heading" id="When-React-Native-can-speed-up-the-software-development-process:-">When React Native can speed up the software development process:&nbsp;</h2>



<ul class="wp-block-list">
<li>There is an existing web application built in React and many parts of the code can be reused. Only UI and native elements must be developed.&nbsp;</li>



<li>It is known from the very beginning that you need to serve your app on multiple platforms and app development process speed is the main factor.&nbsp;</li>



<li>Applications are not complex from the technological perspective and there probably won’t be a need to develop custom or native components.&nbsp;</li>



<li>The application is not expected to scale to a point where huge amounts of data could slow it and make it laggy.&nbsp;</li>



<li>The team in the company has a lot of React experience. It might be cheaper to allow them to learn native development than to hire dedicated teams for Android and iOS development.</li>
</ul>



<h2 class="wp-block-heading" id="When-should-you-use-React-Native-over-progressive-web-apps-(PWA)?-">When should you use React Native over progressive web apps (PWA)?&nbsp;</h2>



<p><a href="https://nearshore-it.eu/technologies/pwa-the-front-end-revolution/" target="_blank" rel="noreferrer noopener">PWAs</a> are simply applications running in browsers that use web technologies such as HTML, CSS, and JavaScript. They can mimic the behavior of native applications to some extent. In many cases, PWAs are sufficient: they can be saved on the home screen, work offline, use push notifications, and access some device features.&nbsp;&nbsp;</p>



<p>React Native is a JavaScript-based framework that allows you to build mobile apps for iOS and Android. It has access to native rendering APIs and can take advantage of GPS, camera, and accelerometer. This allows developers to build applications that feel truly native and perform at a nearly native level.&nbsp;</p>



<h3 class="wp-block-heading">When should we choose React Native over PWA?&nbsp;</h3>



<p>PWAs have limited access to certain hardware features of a device, which can limit how the app functions. We should be aware of this and choose React Native when a specific device feature is key to the app&#8217;s success.</p>



<ul class="wp-block-list">
<li>Battery consumption is higher in PWAs, because they run in the browser, which is less efficient than a native app.</li>



<li>Performance is better in React Native apps because they have access to native rendering APIs.</li>



<li>React Native uses native UI components specific to each platform. That results in a seamless User Experience.</li>
</ul>



<p>Read also: <a href="https://nearshore-it.eu/technologies/angular-pwa/">Angular – Let’s create our own Progressive Web Application</a></p>



<h2 class="wp-block-heading" id="Best-practices-for-React-Native-mobile-app-development">Best practices for React Native mobile app development</h2>



<p>Most best practices for React Native mobile applications are the same as for React web app development.</p>



<ul class="wp-block-list">
<li>Carefully plan code structure.</li>



<li>Keep your components small and reusable.</li>



<li>Extract reusable parts of logic to promote modular code.</li>



<li>Try not to overcomplicate state management. Sometimes local state is all you need.</li>



<li>Optimize performance where needed. Avoid unnecessary re-renders, and use optimized elements like FlatList.</li>



<li>Avoid inline styling; use shared styles.</li>



<li>Try to build dynamic components that can rely on server-side data. That might limit the number of app releases when some minor logic changes are introduced.</li>



<li>Don’t forget to test your code.</li>



<li>Use React Native’s built-in accessibility features.</li>



<li>Introduce crash analytics to ensure your app is healthy and the number of crashes is low.<br>Log errors. That will help you with debugging issues and gathering analytics for future development.</li>
</ul>
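<p>Two of these practices, extracting reusable logic and avoiding unnecessary re-renders, can be sketched in plain TypeScript. All names below are illustrative; in a real component the memoized selector would typically be replaced by <code>useMemo</code>:</p>

```typescript
// Hypothetical example: filtering/sorting is pulled out of the component
// so it can be unit-tested and shared between screens.
interface Task {
  title: string;
  done: boolean;
}

function visibleTasks(tasks: Task[], showDone: boolean): Task[] {
  return tasks
    .filter((t) => showDone || !t.done)
    .sort((a, b) => a.title.localeCompare(b.title));
}

// Tiny single-slot memoizer: for repeated calls with the same arguments
// it returns the cached array, so a FlatList's `data` prop keeps a
// stable reference and the list does not re-render needlessly.
function memoizeLast<A extends unknown[], R>(fn: (...args: A) => R) {
  let lastArgs: A | null = null;
  let lastResult!: R;
  return (...args: A): R => {
    if (
      lastArgs &&
      lastArgs.length === args.length &&
      lastArgs.every((v, i) => v === args[i])
    ) {
      return lastResult;
    }
    lastArgs = args;
    lastResult = fn(...args);
    return lastResult;
  };
}

const selectVisible = memoizeLast(visibleTasks);
```

<p>Calling <code>selectVisible</code> twice with the same inputs returns the same array instance, which is exactly what reference-equality checks in React rely on to skip work.</p>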



<h2 class="wp-block-heading" id="Advantages-of-React-Native-cross-platform-app-development-">Advantages of React Native cross-platform app development&nbsp;</h2>



<p>There are business and technical advantages to using React Native. The most important are the following.&nbsp;</p>



<ul class="wp-block-list">
<li><a href="https://www.bacancytechnology.com/blog/why-use-react-native" data-type="link" data-id="https://www.bacancytechnology.com/blog/why-use-react-native" target="_blank" rel="noopener">React Native is gaining popularity. It holds around 13% of the market share among cross-platform mobile app development frameworks.</a></li>



<li>There are many experienced React developers who can switch to React Native development relatively easily.</li>



<li>React Native community is large.</li>



<li>React Native targets many platforms at once.</li>



<li>If there is a need for web application development with similar features, parts of the codebase can be shared between React and React Native.</li>



<li>React Native is fast because it uses native APIs. It gives a near-native experience in terms of application smoothness.</li>
</ul>



<h2 class="wp-block-heading" id="Disadvantages-of-React-Native-cross-platform-app-development-">Disadvantages of React Native cross-platform app development&nbsp;</h2>



<p>Just like every cross-platform solution built by a third party, React Native has its cons. The most important are:&nbsp;</p>



<ul class="wp-block-list">
<li>Relying on third-party solutions can be risky, even if the company that created them is large and has a big community.</li>



<li>Some custom modules are missing and others can be underdeveloped.</li>



<li>Upgrades are not always easy to apply.</li>



<li>Native developers might be needed from time to time to develop more sophisticated plugins written in native languages.</li>



<li>It is easy to write an app that won’t perform as expected, so you need to pay extra attention to building an optimized application.</li>



<li>Complex designs and interactions can run slowly in React Native.</li>



<li>Even a well-optimized application won’t reach the speed of a native application.</li>
</ul>



<h2 class="wp-block-heading" id="12-reasons-to-choose-React-Native-to-develop-mobile-apps-">12 reasons to choose React Native to develop mobile apps&nbsp;</h2>



<p>Let’s summarize it with a list of reasons that should encourage you to use React Native.&nbsp;</p>



<ol class="wp-block-list">
<li>It targets many platforms at once.</li>



<li>React Native can have a shared codebase with the React web application.</li>



<li>It has a large community.</li>



<li>React Native speed can be described as near-native.</li>



<li>The framework is gaining popularity in mobile development.</li>



<li>React Native uses native APIs and has access to specific device features.</li>



<li>React Native handles platform-specific UI implementations for you.</li>



<li>React Native developers are not limited to existing modules. They can write native modules.</li>



<li>The React community is big, and this might help to build a React Native team.</li>



<li>New features and improvements are often released.</li>



<li>Good developer experience during the development process (e.g. hot reloading).</li>



<li>React Native was developed by Facebook (Meta) and it is used in most of their mobile applications. Even though it’s a third-party solution, it might be considered a relatively safe one to use.</li>
</ol>



<h2 class="wp-block-heading" id="React-Native-case-study-">React Native case study&nbsp;</h2>



<p>Our client is a company in the heavy industry sector and a European leader in laser metal processing equipment. End customers who buy the machines get a complete solution bundled with a control system for operators and service technicians.&nbsp;<br>&nbsp;<br>They needed a desktop application to serve as a control panel for monitoring cutting processes in the factory. The initial decision was to use React and Electron to build a Windows desktop application. During technical conversations, the team decided to switch to React Native to leave the door open for native app development or performance improvements.&nbsp;&nbsp;&nbsp;</p>



<p>This was due to the risk that the application displayed in the browser on slower tablets might not be efficient enough to correctly preview the live cutting process.&nbsp;&nbsp;</p>



<p>Additionally, the client hadn’t made a final decision on whether the app should be made available in the App Store and Google Play. In the end, the application was made available on Windows, but the use of React Native creates opportunities for future mobile development.&nbsp;</p>



<h2 class="wp-block-heading">Summary&nbsp;</h2>



<p>This development framework allows developers to create cross-platform mobile apps that run seamlessly on both iOS and Android. By leveraging JavaScript and React, a React Native app development company can build an app on top of native features, ensuring a smoother user experience. React Native development services enable the integration of native libraries, allowing native device capabilities to be used throughout the development process. A React Native development company can bring significant benefits, resulting in faster development. If you are interested in React Native development services, contact us to see how your project can speed up.&nbsp;&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/react-native-app-development-guide-for-mobile-app-development/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Top DevSecOps Tools: Ensuring Sensitive Data Security &#038; Compliance               </title>
		<link>https://nearshore-it.eu/articles/devops-devsecops-safeguard-sensitive-data-with-right-tools/</link>
					<comments>https://nearshore-it.eu/articles/devops-devsecops-safeguard-sensitive-data-with-right-tools/#respond</comments>
		
		<dc:creator><![CDATA[Amadeusz Kryze]]></dc:creator>
		<pubDate>Wed, 09 Oct 2024 03:30:05 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<category><![CDATA[DevOps]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=33399</guid>

					<description><![CDATA[Learn about DevOps vs DevSecOps and integrate security into the software development lifecycle pipeline using DevSecOps security practices. ]]></description>
										<content:encoded><![CDATA[
<p>In today&#8217;s fast-paced software development landscape, integrating security into the development process is essential for protecting sensitive data and ensuring compliance. The DevSecOps model emphasizes the collaboration between development and operations teams to ensure that security practices are embedded throughout the software development lifecycle. By fostering a strong partnership between the DevSecOps team, security team, and operations team, organizations can effectively address security vulnerabilities and implement security controls early in the DevOps process. Read on to learn about DevSecOps and the security tools that can strengthen your IT project.&nbsp;</p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#What-is-DevSecOps?">1.  What is DevSecOps? </a></li>
                    <li><a href="#Benefits-of-DevSecOps-">2.  Benefits of DevSecOps </a></li>
                    <li><a href="#DevSecOps-vs-DevOps-">3.  DevSecOps vs DevOps </a></li>
                    <li><a href="#DevSecOps-for-enhanced-application-security-">4.  DevSecOps for enhanced application security </a></li>
                    <li><a href="#DevSecOps-culture">5.  DevSecOps culture </a></li>
                    <li><a href="#Continuous-Integration">6.  Continuous Integration </a></li>
                    <li><a href="#Continuous-Delivery">7.  Continuous Delivery </a></li>
                    <li><a href="#Continuous-Security">8.  Continuous Security </a></li>
                    <li><a href="#Communication-and-collaboration-">9.  Communication and collaboration </a></li>
                    <li><a href="#DevSecOps-best-practices:-shift-security-left">10.  DevSecOps best practices: shift security left </a></li>
                    <li><a href="#Implementing-DevSecOps-and-automating-security-best-practices-">11.  Implementing DevSecOps and automating security best practices </a></li>
                    <li><a href="#Recommended-DevSecOps-tools">12.  Recommended DevSecOps tools </a></li>
                    <li><a href="#Successful-DevSecOps-and-DevOps-Integration-">13.  Successful DevSecOps and DevOps Integration </a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="What-is-DevSecOps?">What is DevSecOps?&nbsp;</h2>



<p>In the realm of modern craft, there arose a practice known among the learned as DevSecOps, a union of three great disciplines: development, security, and operations. It was not unlike the forging of a mighty alliance, wherein each realm must contribute its strength, lest their endeavors fall prey to the shadows of vulnerability.&nbsp;</p>



<p>In days past, many development teams would wait until the final hour, when the code was near release, before calling upon the wardens of security. Yet this path was fraught with peril, for to uncover weaknesses so late would often cost dear in time, gold, and effort. But the wisest of realms soon saw another way.&nbsp;</p>



<h2 class="wp-block-heading" id="Benefits-of-DevSecOps-">Benefits of DevSecOps&nbsp;</h2>



<p>By weaving security into every phase of the software development lifecycle – whether in the laying of the first line of code or the final shaping of the product – teams could safeguard their work from the outset and prevent a number of security issues. Through deep collaboration and the magic of automation, they crafted a system where all shared the burden of protection. Thus, no longer was security the task of the few, but of all who toiled together, forging a shield that would hold fast even in the face of the darkest threats.&nbsp;</p>



<p>In this way, DevSecOps was not merely a method, but a way of ensuring that no creation would leave the forges unguarded against the unseen dangers.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="DevSecOps-vs-DevOps-">DevSecOps vs DevOps&nbsp;</h2>



<p>In traditional software development, the crafting of great projects was long governed by the old ways, where the work was divided into phases – like the seasons of the year – each flowing one after the other. &nbsp;</p>



<p>There would come a time for planning, then design, and only after would the labor of development begin, followed by the testing and binding of all parts into a whole. Yet, this process, while orderly, was as slow as the march of the seasons themselves. In an age where customers&#8217; desires are ever-shifting, such a pace no longer sufficed.&nbsp;</p>



<p>Worse still, security specialists were oft called upon at the very end, when the work was near complete, to cast their protections over the product. Alas, this lateness was fraught with danger, for vulnerabilities uncovered at such a time could unravel much of the work that had come before.&nbsp;</p>



<p>Thus, many turned to a new way: the DevOps model. Here, rather than waiting for the long seasons to pass, the DevOps teams delivered smaller, yet finely crafted parcels of work – each one polished and ready – rather than undertaking vast projects that stretched on for years. &nbsp;</p>



<p>In this way, the teams of development and operations joined forces, testing and refining their work as they went. By using the tools of automation and forging standardized processes, they moved with great speed, yet kept the quality of their software product intact.&nbsp;</p>



<h2 class="wp-block-heading" id="DevSecOps-for-enhanced-application-security-">DevSecOps for enhanced application security&nbsp;</h2>



<p>But in time, the guilds realized there was a further step to be taken. They sought not just swiftness, but security woven into every step of their journey. And so was born DevSecOps, where security was not an afterthought but a companion to every stage of the craft. From the very first whispers of planning, security was present, and in the fires of development, it was tested and shaped alongside the code. The burden of security threat protection no longer lay on the shoulders of a few, but on the entire fellowship of creators. This new way became known to some as ‘shift left security&#8217;, for it brought the guardians of protection to the forefront of the process, ensuring that no ill would befall their work from the very start of the journey to its end.&nbsp;</p>



<p><strong>Read also:</strong> </p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/technologies/azure-cost-management-101-how-to-optimize-cloud-costs/">Azure Cost Management 101</a></li>



<li><a href="https://nearshore-it.eu/technologies/cloud-native/">Cloud-native applications: what do you need to know?</a></li>
</ul>



<h3 class="wp-block-heading">Why is DevSecOps important?&nbsp;</h3>



<p>Many foes are seeking to gain entry and plunder a company&#8217;s most valuable treasures – its data and assets. One of the most common practices they employ is the exploitation of weaknesses hidden deep within the folds of the organization&#8217;s own software. These vulnerabilities, if left unchecked, are like cracks in the foundation of a mighty fortress, and through them, adversaries can slip in unnoticed, wreaking havoc from within.&nbsp;</p>



<p>Such breaches can be devastating. They consume both time and money, and in their wake, they leave scars upon a company&#8217;s name, causing trust to wither among clients and partners alike.&nbsp;</p>



<h2 class="wp-block-heading" id="DevSecOps-culture">DevSecOps culture&nbsp;</h2>



<p>But there is hope within DevSecOps. By following this path, the guilds of development, security, and operations stand together, ever watchful. The framework is like a vigilant sentry, reducing the risk of sending forth software riddled with flaws, misconfigurations, or vulnerabilities. Through constant vigilance and the weaving of security into every phase of creation, they close the gates through which bad actors might pass, fortifying their code so that it may stand strong against the onslaught of those who would seek to exploit it. Thus, the company stands firm, its reputation unshaken, its defenses prepared for the battles yet to come.&nbsp;</p>



<h2 class="wp-block-heading" id="Continuous-Integration">Continuous Integration&nbsp;</h2>



<p>In the world of software development, there once was a time when the labor of many artisans was brought together only at the final hour, when all the pieces were near completion. But it was often in those moments, at the very end, that flaws were revealed, and the seams of their work would unravel, leaving them with a tangle of issues too great to swiftly resolve.&nbsp;</p>



<p>But a new method arose, known as Continuous Integration, a practice where the builders of code did not wait for the end to unite their works. Instead, they would commit their efforts to a central repository many times throughout the day, like artisans returning to the hearth to meld their creations together piece by piece. Each time they did so, their work was automatically tested and integrated, ensuring that all parts fit together in harmony.&nbsp;</p>



<p>By catching integration issues and bugs early, long before the final forge was set aflame, this approach saved the guilds from the chaos of last-minute discoveries.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Continuous-Delivery">Continuous Delivery&nbsp;</h2>



<p>Building upon the foundation of Continuous Integration, a further practice emerged, known as Continuous Delivery. Where the former ensured that code was swiftly integrated and tested, Continuous Delivery took this one step further, automating the journey from the builder&#8217;s hands to a staging environment, where code would be tested.&nbsp;</p>



<p>Once the code reached this staging ground, it did not rest. The system immediately set to work, not only with unit testing, but with a series of trials to ensure that all aspects of the creation were sound. The user interface was inspected to ensure it responded as intended, the seams of integration were examined for any weaknesses, and the APIs were tested to confirm they communicated well between systems. The code was also tested under the weight of simulated traffic, to see if it could bear the burden of the many users it would one day serve.&nbsp;</p>



<p>The aim of Continuous Delivery is simple: to consistently deliver code that was not only complete, but of true value.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Continuous-Security">Continuous Security&nbsp;</h2>



<p>In DevSecOps practice, one of the most vital elements is the weaving of security into every step of the software&#8217;s journey, from the first step of design to the final unveiling. No longer could security be treated as an afterthought, called upon only when the work was nearly complete. Instead, it became a core part of the process, guiding the work like an unseen but ever-present hand.&nbsp;</p>



<p>From the earliest stages, when the blueprints of the software were still taking shape, the guilds would engage in threat modeling, a process of foresight where they sought to uncover any potential dangers. They did not wait for the enemy to strike, but anticipated its moves, fortifying their code against unseen attacks before they could take root.&nbsp;</p>



<p>As the work progressed, automated security testing was woven into every stage of the DevOps workflow. Automation played a key role, testing the code continuously, from the developers&#8217; own environments to the farthest reaches of the deployment pipeline. No phase was left unguarded; no section of code went untested.&nbsp;</p>



<p>Through this constant vigilance – testing early and testing often – the teams were able to find and mend weaknesses swiftly. And so, they delivered their software products with confidence, knowing that each line of code was protected. With DevSecOps, the road to production became one of fewer pitfalls and greater security, allowing organizations to deliver secure software swiftly and with minimal issues.&nbsp;</p>



<h2 class="wp-block-heading" id="Communication-and-collaboration-">Communication and collaboration&nbsp;</h2>



<p>In the practice of DevSecOps, the strength of the software does not lie solely in the application security tools or the processes but in the fellowship of those who undertake the journey together. It is a path that demands more than mere skill; it calls for deep collaboration and unity of purpose among individuals and teams.&nbsp;</p>



<p>When the developers commit their work to the central repository in the practice of Continuous Integration, conflicts in code inevitably arise. But it is through collaboration – where minds come together, and voices are heard – that these challenges are resolved. Developers, security experts, and operations alike must work side by side, swiftly addressing these conflicts so the flow of progress is not hindered.&nbsp;</p>



<p>But beyond the technical work, there is a greater need – communication. Teams must speak openly and often, sharing their visions and aligning around the same goals. Without this shared understanding, the work would drift in many directions, and the efforts of one might undo the labors of another. In DevSecOps approach, every hand contributes to the same creation, and every voice is heard in the great chorus.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="DevSecOps-best-practices:-shift-security-left">DevSecOps best practices: shift security left&nbsp;</h2>



<h3 class="wp-block-heading">Planning and development&nbsp;</h3>



<p>Introducing security policies and addressing security risks early into the rhythm of development sprints is akin to fortifying the foundations of a great structure before the first stones are laid. By addressing vulnerabilities in the early stages, teams not only reduce the risk of future threats but also save valuable time, for it is far easier to mend potential flaws before the code has been built and added to the greater whole. &nbsp;</p>



<p>At the start of every sprint, during planning and development, threat modeling becomes a crucial tool, a map guiding the teams to uncover and mitigate potential dangers long before they can occur. By identifying these threats early, security is no longer something that is added at the end, but something that is woven into the very fabric of the application from the outset.&nbsp;</p>



<p>To ensure security, before the code is committed to the shared repository, automated checks are employed, acting as vigilant sentries. Integrated development environment (IDE) security plug-ins provide developers with immediate feedback, warning them if their code harbors a potential risk. These automated checks catch flaws early, empowering developers to address them before they can take root.&nbsp;</p>



<p>As the code continues its journey, passing from one set of hands to the next, the software is further refined during the code review. Here, someone with the knowledge of security steps forward, offering their insight and making recommendations to bolster the work. This expertise ensures that, by the time the code is ready to move on, it is fortified against threats, functional, and valuable. &nbsp;&nbsp;</p>



<h3 class="wp-block-heading">Code commit&nbsp;</h3>



<p>A cornerstone of the DevSecOps process lies in the practice of continuous integration, a discipline that ensures code is not simply created in isolation but is constantly woven into the central repository, allowing teams to catch issues before they can fester. Developers, like diligent artisans, commit their code several times a day, ensuring that each piece of work seamlessly fits into the greater whole. This frequent integration allows potential conflicts or errors to be discovered early, long before they can threaten the stability of the project.&nbsp;</p>



<p>However, to truly safeguard the craft, it is vital to introduce security into this phase. Automated security checks must stand guard alongside the integration process. These include scanning third-party libraries and dependencies – those external pieces of code that, while useful, may harbor unseen vulnerabilities. Unit testing ensures the smallest parts of the code function as they should, while static application security testing (SAST) reviews the code for weaknesses, searching for hidden threats that might otherwise go unnoticed.&nbsp;</p>



<p>But safeguarding the code itself is not enough. The continuous integration (CI) and continuous delivery (CD) infrastructure, which carries this code from creation to deployment, must also be protected. Role-based access controls (RBAC) play a crucial role in this defense, limiting access to the system based on the specific roles of individuals. By ensuring that only those with the right permissions can interact with the CI/CD infrastructure, teams protect it from attackers who might seek to run malicious code or steal credentials.&nbsp;</p>



<p>In this way, the continuous integration process becomes not only a means to unite code swiftly and efficiently but also a stronghold against external threats. Security is built into every layer, from the automated checks that scan the code to the protections guarding the very systems that bring the work to life.&nbsp;</p>
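<p>The checks described above can be wired into the commit stage as a simple pipeline gate. The sketch below assumes a JavaScript project and uses <code>npm audit</code> and Semgrep purely as illustrative stand-ins; the article does not prescribe specific tools, and any of these steps could be swapped for equivalents.</p>

```shell
#!/usr/bin/env sh
# Hypothetical CI security gate run on every commit. Tool choices are
# illustrative: any dependency scanner, test runner, and SAST tool will do.
set -eu

# Scan third-party libraries and dependencies for known vulnerabilities
npm audit --audit-level=high

# Run unit tests so the smallest parts of the code are verified
npm test

# Static application security testing (SAST); --error makes findings
# fail the build with a non-zero exit code
semgrep scan --config auto --error
```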



<h3 class="wp-block-heading">Building and testing&nbsp;</h3>



<p>In DevSecOps, where vigilance is paramount, the test environment serves as a proving ground for code before it ventures into production. Here, automated security scripts seek out potential threats that may have slipped past earlier defenses. By running these tests in a controlled environment, teams can uncover hidden vulnerabilities and ensure their work remains strong and secure. &nbsp;<br>&nbsp;<br><strong>DAST</strong>&nbsp;</p>



<p>Among the many tests that can be employed during this phase is Dynamic Application Security Testing (DAST), which simulates real-world attacks against the running application. Unlike static tests, DAST operates while the application is live, identifying vulnerabilities such as cross-site scripting, SQL injection, and other dangerous flaws.&nbsp;</p>
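<p>For instance, OWASP ZAP – one widely used open-source DAST scanner, shown here as an illustration rather than a recommendation from this article – can run a passive baseline scan against a running deployment:</p>

```shell
# Crawl the running application and report likely vulnerabilities
# without performing active attacks; the target URL is a placeholder
docker run --rm -t ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t https://staging.example.com
```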



<p><strong>Infrastructure scanning&nbsp;</strong>&nbsp;</p>



<p>Infrastructure scanning follows, casting its gaze across the entire architecture, from servers to networks, searching for weaknesses in the foundational layers that might allow an attacker entry. For those employing containers as part of their deployment strategy, container scanning ensures that these lightweight units of software do not harbor vulnerabilities in their dependencies or configurations, fortifying them before they are deployed.&nbsp;</p>



<p><strong>Cloud configuration validation</strong>&nbsp;</p>



<p>In the age of the cloud, where infrastructure is often abstracted and spread across vast digital environments, cloud configuration validation becomes crucial. By checking the configurations of cloud resources, teams can ensure that no misconfigurations – such as excessive permissions or insecure access points – expose their environments to unnecessary risk.&nbsp;</p>



<p>Lastly, security acceptance testing ensures that all necessary security requirements are met. This step serves as the final safeguard, confirming that the code and infrastructure are not only functional but also fortified against known threats, with risks minimized.&nbsp;</p>



<h3 class="wp-block-heading">Production&nbsp;</h3>



<p>Once the application has been deployed to production and stands in the real world, some organizations take a proactive step to uncover any remaining weaknesses by engaging in penetration testing. This practice is more than just another test – it is a deliberate attempt to breach the application as an attacker might, with real-world tactics and determination.&nbsp;</p>



<p>In penetration testing, skilled individuals, often referred to as ethical hackers, adopt the mindset of a potential adversary. They probe the application for weaknesses, using the same strategies and tools a malicious actor might employ. These tests can range from exploiting known vulnerabilities in third-party components to more sophisticated attacks aimed at bypassing the application&#8217;s defenses.&nbsp;</p>



<p>The goal is simple: to expose any weaknesses that might have slipped through the earlier layers of security testing, those that could potentially be exploited. By simulating real-world attack scenarios, penetration testing reveals how the application holds up under direct assault, whether it&#8217;s vulnerable to unauthorized access, data breaches, or other forms of compromise.&nbsp;</p>



<p>This phase is crucial for understanding not just theoretical vulnerabilities but how the system behaves in a live environment, where the stakes are highest. Penetration testing provides organizations with invaluable insights into the robustness of their defenses, allowing them to patch any remaining weaknesses before an actual attacker can exploit them. Thus, it becomes the final line of preparation, ensuring that the application is truly ready to stand firm against threats in the production environment.&nbsp;</p>



<h3 class="wp-block-heading">Operation&nbsp;</h3>



<p>Even with the most robust DevSecOps process, no system is entirely resistant to evolving threats. This is why continuous monitoring of applications becomes essential once they are deployed. By maintaining constant vigilance, organizations can quickly detect, respond to, and mitigate any new vulnerabilities, unforeseen threats and risks before they cause significant harm.&nbsp;</p>



<p>Monitoring tools look for signs of irregularities, scanning for vulnerabilities, unauthorized access attempts, or other suspicious activities that might signal a breach or weakness. These tools provide real-time insights, alerting teams to potential issues the moment they arise.&nbsp;&nbsp;</p>



<p>To further strengthen this defense, analytics data plays a key role. By analyzing patterns and trends in security events, teams can evaluate the effectiveness of their security posture. This data offers valuable insights into how well current defenses are performing, allowing organizations to track whether they are improving over time or if new vulnerabilities are emerging. It also highlights areas that may require optimization, guiding future efforts in reinforcing the system.&nbsp;&nbsp;</p>



<p>Bear in mind, however, that in the world of security, the battle is never truly over. </p>



<p><strong>Read also:</strong> </p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/technologies/cloud-agnostic-applications-pros-and-cons-of-cloud-agnostic-strategies/">Cloud-Agnostic Applications: Pros and Cons</a></li>



<li><a href="https://nearshore-it.eu/best-practices/horizontal-vs-vertical-scaling/">Horizontal vs Vertical Scaling: A 101 Guide</a></li>



<li><a href="https://nearshore-it.eu/technologies/cloud-computing-trends-for-2023-2025/">How to gain a cloud advantage? Here are 7 cloud computing trends for 2023 – 2025 </a></li>
</ul>



<h2 class="wp-block-heading" id="Implementing-DevSecOps-and-automating-security-best-practices-">Implementing DevSecOps and automating security best practices&nbsp;</h2>



<p>I bid you to consider these tools as you embark upon the journey of DevSecOps automation within your organization. Some are like fruit hanging low upon the bough, easily gathered and swiftly put to use, while others may lie deeper within the forest, requiring more effort to attain. Yet, though the path to them may be more difficult, the rewards they yield are well worth the quest.&nbsp;</p>


</style><div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/07/Marek-Dobkowski-1-bezloczkowy-kwadrat.jpg" alt="Marek Dobkowski 1 bezloczkowy kwadrat" title="Top DevSecOps Tools: Ensuring Sensitive Data Security &amp; Compliance                12"></div><div class="tile-content"><p class="entry-title client-name promotion-box__headline2">Want to gain cost and competitive advantage in the cloud?</p>
<p class="promotion-box__description2">Consult with <strong>Marek Dobkowski</strong>, Head of Microsoft Practice, for expert guidance.</p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek1@gfi.fr/" target="_blank" rel="noopener">Let's talk!</a>





</div></div></div></div>



<h2 class="wp-block-heading" id="Recommended-DevSecOps-tools">Recommended DevSecOps tools&nbsp;</h2>



<h3 class="wp-block-heading">Trivy&nbsp;</h3>



<p>Trivy has risen to prominence as a trusted solution among open-source security scanners, valued for its reliability, swiftness, and simplicity. It offers a far-reaching array of security checks, making it a vital companion for those seeking to fortify their DevSecOps practices. For teams looking to secure their realms of code and infrastructure, Trivy stands as a steadfast tool, ever vigilant and ready to ensure the safety of their creations.&nbsp;</p>



<p><strong>Targets (what Trivy can scan):&nbsp;</strong></p>



<ul class="wp-block-list">
<li>Container Image&nbsp;</li>



<li>Filesystem&nbsp;</li>



<li>Git Repository (remote)&nbsp;</li>



<li>Virtual Machine Image&nbsp;</li>



<li>Kubernetes&nbsp;</li>



<li>AWS&nbsp;</li>
</ul>



<p><strong>Scanners (what Trivy can find there):&nbsp;</strong></p>



<ul class="wp-block-list">
<li>OS packages and software dependencies in use (SBOM)&nbsp;</li>



<li>Known vulnerabilities (CVEs)&nbsp;</li>



<li>IaC issues and misconfigurations&nbsp;</li>



<li>Sensitive information and secrets&nbsp;</li>



<li>Software licenses&nbsp;</li>
</ul>
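<p>As an illustration, here are a few representative invocations, assuming Trivy is installed; the image name and paths are placeholders, and each subcommand maps to one of the targets listed above:</p>

```shell
# Scan a container image for known vulnerabilities (CVEs)
trivy image nginx:1.27

# Scan a local project tree for vulnerabilities, secrets, and misconfigurations
trivy fs --scanners vuln,secret,misconfig .

# Check infrastructure-as-code files (e.g. Terraform) for misconfigurations
trivy config ./infrastructure
```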



<h3 class="wp-block-heading">Trufflehog&nbsp;</h3>



<p>The TruffleHog tool is a masterful seeker of concealed passwords and keys. Like a skilled ranger, TruffleHog ventures where few dare to tread, unearthing hidden secrets that, if left unchecked, could spell doom for the unwary.&nbsp;</p>



<p><strong>How TruffleHog wields its power:</strong>&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Detect: </strong>TruffleHog scours the history of all platforms, much like a wise lorekeeper sifting through ancient scrolls, seeking out long-forgotten secrets. Yet it looks not only in the obvious places but also in the whispers of comments, the hidden folds of Docker images, and other obscure corners.&nbsp;</li>



<li><strong>Analyze:</strong> TruffleHog reveals the true nature of the secrets it uncovers, discerning what resources and permissions are tied to API keys and other tokens. Remarkably, it achieves this without ever needing to peer into the provider&#8217;s vault.&nbsp;</li>



<li><strong>Prevent:</strong> To stop the ill-fated inclusion of secrets from the very beginning, TruffleHog sets traps at key points, using pre-commit and pre-receive hooks. These safeguards ensure that no sensitive data is unintentionally leaked before it ever leaves the developer&#8217;s hand.&nbsp;</li>



<li><strong>Remediate:</strong> TruffleHog continues to track the fate of discovered keys and secrets. It verifies that remediation is complete, sending reminders on preferred platforms and providing knowledge to users on how to properly manage and secure the keys that were once at risk.&nbsp;</li>
</ul>



<p><strong>Why TruffleHog is a worthy ally:</strong>&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Comprehensive multi-branch analysis: </strong>TruffleHog does not simply guard the main road but patrols every path. It scans all branches, not just the primary one, ensuring the same level of vigilance across the entire project. This is especially valuable in larger domains where many branches are being worked on in tandem.&nbsp;</li>



<li><strong>Credential verification:</strong> TruffleHog employs programmatic verification, testing each credential using its own protocol or API. This removes the false trails, ensuring that only real threats are brought to light.&nbsp;</li>



<li><strong>Open-source fellowship: </strong>As with any great alliance, TruffleHog thrives through the support of an open-source community. Many dedicated hands join together to audit and improve the tool, ensuring that no single voice carries undue weight. The community checks and balances each other&#8217;s work, so that trust is shared among all.&nbsp;<br>&nbsp;</li>
</ul>
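

<p>For illustration, a few ways this ally might be called upon from the command line. A hedged sketch assuming TruffleHog v3 is installed; the repository URL and image name are placeholders:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># Scan the full git history of a repository, reporting only verified secrets
trufflehog git https://github.com/org/repo --only-verified

# Scan a local directory
trufflehog filesystem ./src

# Scan the layers of a Docker image
trufflehog docker --image myapp:latest</pre>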



<h3 class="wp-block-heading">Snyk&nbsp;</h3>



<p>This platform guards the entirety of an application&#8217;s journey – from the very first lines of code to its deployment in the cloud. Through its guidance, developers may discover and mend vulnerabilities before they are ever loosed upon the world.&nbsp;</p>



<p><strong>The powers of Snyk:</strong>&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Snyk open source: </strong>Snyk scours open-source libraries and dependencies, seeking out vulnerabilities. And when such flaws are found, it does not merely warn the developer but offers a swift means to mend them, restoring the strength of the code.&nbsp;</li>



<li><strong>Snyk code:</strong> As code is written, Snyk watches in real-time, finding and fixing vulnerabilities within the very heart of the application. It is like a companion at the developer&#8217;s side, ever watchful and ready to lend its aid.&nbsp;&nbsp;</li>



<li><strong>Snyk container:</strong> In the context of containers and Kubernetes, where applications are housed, Snyk&#8217;s gaze does not falter. It delves into container images, finding and repairing potentially harmful vulnerabilities.&nbsp;&nbsp;</li>



<li><strong>Snyk Infrastructure as Code: </strong>With great foresight, Snyk peers into the blueprints of infrastructure itself, examining the configurations of Terraform and Kubernetes code. Should it find any insecurity in the very foundation, it offers swift guidance on how to rectify these flaws, ensuring that the structure remains strong and secure.&nbsp;</li>
</ul>
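

<p>Each of these powers maps to its own CLI command. A minimal sketch, assuming the Snyk CLI is installed and authenticated (via snyk auth); the image name and path are placeholders:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># Snyk Open Source: test the project's dependencies for known vulnerabilities
snyk test

# Snyk Code: static analysis of the application's own source
snyk code test

# Snyk Container: scan a container image
snyk container test myapp:latest

# Snyk Infrastructure as Code: check Terraform/Kubernetes configurations
snyk iac test ./infra</pre>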



<h3 class="wp-block-heading">Pre-commit&nbsp;</h3>



<p>Pre-commit is a powerful framework for managing and maintaining pre-commit hooks across many programming languages. It ensures that no errant detail is left unchecked before the code is sent for review.&nbsp;</p>



<p>In the world of Git, hook scripts act as a safeguard, catching simple errors before they reach the eyes of a reviewer. Whenever a developer commits their work, these hooks spring into action, pointing out issues such as missing semicolons, trailing whitespace, or forgotten debug statements. By addressing these small matters early, Pre-commit allows the reviewer to focus on the grand architecture of the changes, rather than wasting time on trivial style errors.&nbsp;</p>
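

<p>A minimal .pre-commit-config.yaml illustrates the idea; the hook revision shown is an assumption and should be pinned to the latest release:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: detect-private-key</pre>



<p>After running pre-commit install in the repository, these checks spring into action on every git commit.</p>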



<h3 class="wp-block-heading">Wazuh&nbsp;</h3>



<p>Free and open to all, Wazuh is skilled in the arts of threat prevention, detection, and response. It is a protector capable of defending the realms of on-premises fortresses, virtualized strongholds, containerized ships, and vast cloud kingdoms alike.&nbsp;</p>



<p>The strength of Wazuh lies in two parts: its endpoint security agents, which are deployed like watchful sentinels to the systems they protect, and its management server, a wise and ever-alert overseer.&nbsp;</p>



<p>The agents gather knowledge and data from the systems they monitor, and the management server collects, analyzes, and interprets this information, ever vigilant for signs of danger.&nbsp;</p>
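

<p>As a sketch of how a sentinel learns the way home, the heart of an agent&#8217;s configuration (ossec.conf) is little more than the address of its manager; the IP below is a placeholder, and the defaults shown are assumptions to verify against the Wazuh documentation:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">&lt;ossec_config>
  &lt;client>
    &lt;server>
      &lt;address>10.0.0.5&lt;/address>  &lt;!-- Wazuh manager IP (placeholder) -->
      &lt;port>1514&lt;/port>
      &lt;protocol>tcp&lt;/protocol>
    &lt;/server>
  &lt;/client>
&lt;/ossec_config></pre>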



<p>Wazuh, when integrated with the Elastic Stack, offers seamless navigation through security alerts, enhancing visibility and threat detection. Combining SIEM and XDR capabilities, it protects IT assets and responds to potential security threats. &nbsp;<br>&nbsp;<br><strong>Use-cases:</strong>&nbsp;</p>



<ul class="wp-block-list">
<li>Configuration assessment&nbsp;</li>



<li>Malware detection&nbsp;</li>



<li>File integrity monitoring&nbsp;</li>



<li>Threat hunting&nbsp;</li>



<li>Log data analysis&nbsp;</li>



<li>Vulnerability detection&nbsp;</li>



<li>Incident response&nbsp;</li>



<li>Regulatory compliance&nbsp;</li>



<li>IT hygiene&nbsp;</li>



<li>Container security&nbsp;</li>



<li>Posture management&nbsp;</li>



<li>Workload protection&nbsp;<br>&nbsp;</li>
</ul>



<h2 class="wp-block-heading" id="Successful-DevSecOps-and-DevOps-Integration-">Successful DevSecOps and DevOps Integration&nbsp;</h2>



<p>The integration should be as natural as the turning of the seasons, an organic and seamless process that unfolds with time. It is not a single task to be completed and forgotten, but a continuous journey. Though at times it may call for a shift in the very culture of the organization, such change is not forced, but arises naturally.&nbsp;</p>



<p>And as in all great works of creation, what we forge must be shaped by the needs of people. The processes we follow and the tools we wield must be chosen wisely, fitting the unique contours of our organization. Only then can the integration thrive, as both DevOps and DevSecOps become not separate disciplines, but part of the same living tapestry, woven together in purpose and vision.&nbsp;</p>


</style><div class="promotion-box promotion-box--image-left promotion-box--full-width-without-image"><div class="tiles latest-news-once"><div class="tile"><div class="tile-content"><p class="promotion-box__description2"><strong>Consult your project directly with a specialist</strong></p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek1@gfi.fr/" target="_blank" rel="noopener">Book a meeting</a></div></div></div></div>



<p>Read also:</p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/articles/devops-monitoring-systems/">DevOps monitoring systems</a></li>



<li><a href="https://nearshore-it.eu/technologies/azure-durable-function-in-serverless-programming/">Azure Durable Functions – the extension to Azure Functions</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/devops-devsecops-safeguard-sensitive-data-with-right-tools/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG</title>
		<link>https://nearshore-it.eu/articles/create-ai-chat-with-semantic-kernel/</link>
					<comments>https://nearshore-it.eu/articles/create-ai-chat-with-semantic-kernel/#respond</comments>
		
		<dc:creator><![CDATA[Marek Dobkowski]]></dc:creator>
		<pubDate>Fri, 30 Aug 2024 10:25:38 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=28857</guid>

					<description><![CDATA[Discover how to create AI chat apps with Semantic Kernel by Microsoft. Learn to build agents using .NET and integrate large language models effortlessly]]></description>
										<content:encoded><![CDATA[
<p>Over the last year, Generative AI has become a popular tool for creating various forms of content, including text, images, and audio. Many developers are now exploring how to incorporate these systems into their applications to benefit their users.</p>



<p>Despite the rapid advancement of technology and the constant release of new models and SDKs, it can be difficult for developers to know where to begin. While there are many polished end-to-end sample applications available for .NET developers to use as a reference, some may prefer to build their applications incrementally, starting with the basics and gradually adding more advanced features.</p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#Building-a-console-based-.NET-chat-app-with-solutions-like-Semantic-Kernel">1.  Building a console-based .NET chat app with solutions like Semantic Kernel</a></li>
                    <li><a href="#How-to-get-started-with-Semantic-Kernel-SDK.-Learn-how-to-use-it">2.  How to get started with Semantic Kernel SDK. Learn how to use it</a></li>
                    <li><a href="#Leveraging-Semantic-functions:-reusable-prompts,-dynamic-input-handling,-and-plugins">3.  Leveraging Semantic functions: reusable prompts, dynamic input handling, and plugins</a></li>
                    <li><a href="#Does-LLM-have-memory?-How-to-use-Semantic-Kernel-to-overcome-statelessness-in-chat-agents">4.  Does LLM have memory? How to use Semantic Kernel to overcome statelessness in chat agents</a></li>
                    <li><a href="#Does-LLM-know-everything?-Connectors-and-RAG-with-Semantic-plugins-and-native-functions">5.  Does LLM know everything? Connectors and RAG with Semantic plugins and native functions</a></li>
                    <li><a href="#Storing-memories">6.  Storing memories</a></li>
                    <li><a href="#Enhancing-your-Semantic-integration:-use-the-Semantic-best-practices-and-considerations">7.  Summary</a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="Building-a-console-based-.NET-chat-app-with-solutions-like-Semantic-Kernel">Building a console-based .NET chat app with solutions like Semantic Kernel</h2>



<p>This post aims to guide developers in building a simple console-based .NET chat application from scratch, with minimal dependencies and fuss. The ultimate goal is to create an application that can answer questions based on both the data used to train the model and additional data provided dynamically. Each code sample provided in this post is a complete application, allowing developers to easily copy, paste, and run the code, experiment with it, and then incorporate it into their own applications for further refinement and customization.</p>



<h2 class="wp-block-heading" id="How-to-get-started-with-Semantic-Kernel-SDK.-Learn-how-to-use-it">How to get started with Semantic Kernel SDK. Learn how to use it</h2>



<p>To begin, make sure you have .NET 8 installed, then create a simple console app:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dotnet new console -o chat-sample-app-01 --use-program-main

cd chat-sample-app-01</pre>



<p>This creates a new directory chat-sample-app-01 and populates it with two files: chat-sample-app-01.csproj and Program.cs. We then need to bring in one NuGet package: Microsoft.SemanticKernel.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dotnet add package Microsoft.SemanticKernel</pre>



<p><a href="https://learn.microsoft.com/en-us/semantic-kernel/overview/" target="_blank" rel="noopener">Find out more at Microsoft Learn</a></p>



<h3 class="wp-block-heading">Planning, orchestration, and multiple plugins</h3>



<p>Instead of referencing specific AI-related packages such as Azure.AI.OpenAI, I have opted for the open-source <a href="https://learn.microsoft.com/en-us/semantic-kernel/overview/" target="_blank" rel="noopener">Semantic Kernel</a> SDK to streamline various interactions and easily switch between different implementations for faster experimentation. Semantic Kernel offers a collection of libraries that simplify working with Large Language Models (LLMs) by providing abstractions for various AI concepts, allowing for the easy substitution of different implementations. It also includes many concrete implementations of these abstractions, wrapping numerous other SDKs, and offers support for planning, orchestration, and multiple plugins. This post will explore various aspects of Semantic Kernel, but my primary focus is on its abstractions.</p>



<p>While I have tried to keep dependencies to a minimum for the purpose of this article, there is one more I cannot avoid: you need access to an LLM. The easiest way to get access is via either OpenAI or Azure OpenAI. For this post, I am using Azure OpenAI. You will need three pieces of information for the remainder of the post:</p>



<ul class="wp-block-list">
<li>Your API key and endpoint provided to you in the Azure portal</li>



<li>A chat model, or to be more precise, the deployment name of your model. I use GPT-4-32k (0613), which as of this writing has a context window of 32K tokens. I’ll explain more about what it is later.</li>



<li>An embedding model. I use text-embedding-3-large.</li>
</ul>



<h3 class="wp-block-heading">Let&#8217;s make it as easy as possible</h3>



<p>With that out of the way, we can dive in. Believe it or not, we can create a simple chat app in just a few lines of code. Copy and paste this into your Program.cs:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;

namespace chatSampleApp01
{
    class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
            
            var kernel = builder.Build();

            //Question and Answer loop
            string question;
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;
                Console.Write("Mine Copilot: ");
                Console.WriteLine(await kernel.InvokePromptAsync(question));
                Console.WriteLine();
            }
        }
    }
}</pre>



<p>To prevent accidentally revealing my API key, which should be safeguarded like a password, I have stored it in an environment variable and accessed it using GetEnvironmentVariable. Then I created a new kernel using the Semantic Kernel APIs and added an OpenAI chat completion service to it. The Microsoft.SemanticKernel package we imported earlier includes references to client support for both OpenAI and Azure OpenAI, eliminating the need for additional components to communicate with these services. With this configuration, we can now run our chat app using dotnet run, enter questions, and receive responses from the service.&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="628" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_01.png" alt="semantic kernel" class="wp-image-28866" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 13" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_01.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_01-300x169.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_01-768x433.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_01-495x279.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>The expression await kernel.InvokePromptAsync(question) is the core of the interaction with the LLM, where it captures the user&#8217;s input and sends it to the LLM, receiving a string response in return. Semantic Kernel is equipped to handle various function types, including prompt functions for text-based AI interactions and standard .NET methods capable of executing any C# code. These functions can be triggered directly by the user, as shown in this example, or as part of a &#8220;plan&#8221; where a set of functions is provided to the LLM to formulate a strategy to achieve a specified objective. Semantic Kernel can execute these functions as per the plan (I will show it later). Additionally, some models support a &#8220;function calling&#8221; feature, which is also simplified by Semantic Kernel.</p>



<h2 class="wp-block-heading" id="Leveraging-Semantic-functions:-reusable-prompts,-dynamic-input-handling,-and-plugins">Leveraging Semantic functions: reusable prompts, dynamic input handling, and plugins</h2>



<p>In this instance, &#8220;function&#8221; refers to the user&#8217;s input, such as the question &#8220;What is Inetum Polska?&#8221; which is then processed by the LLM through the InvokePromptAsync method. To clarify the concept of &#8220;function,&#8221; we can extract it into a separate entity using the &#8220;CreateFunctionFromPrompt&#8221; method, allowing us to reuse the same function for multiple inputs. This approach eliminates the need to create a new function for each input, but requires a way to incorporate the user&#8217;s input into the existing function. Semantic Kernel supports this through prompt templates, which include placeholders that are filled with the appropriate variables and functions. For example, if the sample is run again with a request for the current time, the LLM will not be able to provide an answer:</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="299" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_03.png" alt="semantic kernel" class="wp-image-28870" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 14" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_03.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_03-300x80.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_03-768x206.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_03-495x133.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>To anticipate such inquiries, we can equip the LLM with the necessary information within the prompt itself. I have registered a function with the kernel that provides the current date and time. Subsequently, I created a prompt function that utilizes a prompt template to invoke this time function during the prompt&#8217;s rendering. This template also incorporates the value of the $input variable. It is possible to pass any number of arguments with arbitrary names using a KernelArguments dictionary; in this case, I have chosen to name one &#8220;input&#8221;. Functions are organized into collections known as &#8220;plugins&#8221;.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;

namespace chatSampleApp02
{
    class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);

            // Create the prompt function as part of a plugin and add it to the kernel.
            builder.Plugins.AddFromFunctions(
                pluginName: "DateTimeHelpers",
                functions: [
                    KernelFunctionFactory.CreateFromMethod(
                        ()=> $"{DateTime.UtcNow:r}", "Now", "Gets the current date and time"
                    )
                ]);

            var kernel = builder.Build();

            var kernelFunction = KernelFunctionFactory.CreateFromPrompt(
                promptTemplate: @"
                    The current date and time is {{ datetimehelpers.now }}.
                    {{ $input }}"
                );

            //Question and Answer loop
            string question;
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;
                Console.Write("Mine Copilot: ");
                Console.WriteLine(await kernelFunction.InvokeAsync(kernel, new() { ["input"] = question }));
                Console.WriteLine();
            }
        }
    }
}
</pre>



<p>When the function is activated, it renders the prompt by calling the previously registered &#8216;Now&#8217; function and integrating its output into the prompt. Now, posing the same question yields a more comprehensive answer.&nbsp;</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="367" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_04.png" alt="semantic kernel" class="wp-image-28873" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 15" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_04.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_04-300x99.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_04-768x253.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_04-495x163.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="Does-LLM-have-memory?-How-to-use-Semantic-Kernel-to-overcome-statelessness-in-chat-agents">Does LLM have memory? How to use Semantic Kernel to overcome statelessness in chat agents&nbsp;</h2>



<p>We have made significant strides: with just a few lines of code, we have crafted a basic chat agent that can field repeated questions and provide responses. Moreover, we have managed to furnish it with extra prompt information to aid in answering questions it would otherwise be unable to tackle. Yet, in doing so, we have also fashioned a chat agent devoid of memory, lacking any awareness of prior conversations:&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="894" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_02.png" alt="semantic kernel" class="wp-image-28876" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 16" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_02.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_02-300x241.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_02-768x616.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_02-493x395.png 493w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>To remedy the statelessness of LLMs and their lack of memory, we must maintain a record of our chat history and incorporate it into each prompt request. This can be done manually by integrating the chat history into the prompt, or we can rely on Semantic Kernel to handle it for us, which in turn can depend on the clients for Azure OpenAI, OpenAI, or any other chat service. The latter approach involves using the registered IChatCompletionService to create a new chat, which is essentially a compilation of all messages. This method not only processes requests and outputs responses but also archives them into the chat history.&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

namespace chatSampleApp03
{
    class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
            var kernel = builder.Build();

            //Create new chat
            var chatService = kernel.GetRequiredService&lt;IChatCompletionService>();
            var chat = new ChatHistory(
                    systemMessage: "You are an AI assistant that helps people find information."
                );

            //Question and Answer loop
            string question;
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;
                chat.AddUserMessage(question);

                Console.Write("Mine Copilot: ");
                var answer = await chatService.GetChatMessageContentAsync(chat);
                chat.AddAssistantMessage(answer.Content!);
                Console.WriteLine(answer);
                
                Console.WriteLine();
            }
        }
    }
}</pre>



<p>With that chat history rendered into an appropriate prompt, we then get back much more satisfying results:&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="789" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_05.png" alt="semantic kernel" class="wp-image-28879" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 17" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_05.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_05-300x212.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_05-768x543.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_05-495x350.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>In a practical application, it is crucial to consider various additional factors, such as the data processing limitations of language models, referred to as the &#8220;context window.&#8221; The GPT-4-32k (0613) model that I am using here can handle ~32,000 tokens, where a token can be a full word, part of a word, or a single character. Additionally, each token incurs a cost for every interaction. Therefore, when transitioning from a trial phase to full production, it becomes essential to monitor the chat history&#8217;s data volume closely and manage it, for example by trimming messages that are no longer needed.</p>
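

<p>One simple strategy, sketched below under the assumption that a rough message cap is acceptable for your scenario, is to drop the oldest exchanges while preserving the system message (ChatHistory implements IList&lt;ChatMessageContent>, so RemoveAt is available):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">// Keep the system message (index 0) and only the most recent messages.
const int maxMessages = 20; // illustrative cap, not a recommended value
while (chat.Count > maxMessages)
{
    chat.RemoveAt(1); // remove the oldest user/assistant message
}</pre>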



<p>We can enhance the user experience by adding a small segment of code that accelerates the interaction. These large language models (LLMs) generate responses by predicting the next token, so while we have been displaying the complete response once it is fully generated, we can actually present it in real time as it is being formulated. This functionality is available in Semantic Kernel through IAsyncEnumerable, which allows for convenient integration using await foreach loops to stream the response incrementally.</p>
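

<p>A sketch of how the answer loop from the previous sample might be adapted for streaming, using the GetStreamingChatMessageContentsAsync method exposed by IChatCompletionService:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">Console.Write("Mine Copilot: ");
var streamedAnswer = new System.Text.StringBuilder();
await foreach (var chunk in chatService.GetStreamingChatMessageContentsAsync(chat))
{
    Console.Write(chunk.Content);        // print each piece as it arrives
    streamedAnswer.Append(chunk.Content);
}
chat.AddAssistantMessage(streamedAnswer.ToString());
Console.WriteLine();</pre>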



<figure class="wp-block-video"><video height="876" style="aspect-ratio: 1116 / 876;" width="1116" controls src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_06.mp4"></video></figure>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="Does-LLM-know-everything?-Connectors-and-RAG-with-Semantic-plugins-and-native-functions">Does LLM know everything? Connectors and RAG with Semantic plugins and native functions</h2>



<p>We have now reached a point where we can pose questions and receive answers, maintain a record of these exchanges to refine future responses, and even broadcast our findings. But is our work complete? Not quite.</p>



<p>As it stands, the only information available to the LLM for providing answers is the data it was initially trained on, plus any additional information we explicitly include in the prompt (like the current time, as previously mentioned). Consequently, if we inquire about topics outside the LLM&#8217;s training or areas where its knowledge is lacking, the responses we receive may be unhelpful, misleading, or entirely incorrect; such responses are often referred to as &#8216;hallucinations&#8217;.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="334" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_07.png" alt="semantic kernel" class="wp-image-28882" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 18" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_07.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_07-300x90.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_07-768x230.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_07-495x148.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>The question is about the latest C# 12 changes, which were released after the training cutoff of the GPT-4-32k (0613) model (November 2023 vs October 2021). The model has no information about the newest capabilities, so it cannot give a reasonable answer to the first question. We need to find a way to teach it about the things the user is asking about.</p>



<p>We already know one way to teach the LLM: include the necessary information in the prompt. For instance, these Microsoft Learn articles:</p>



<ul class="wp-block-list">
<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/relationships-between-language-and-library.md" target="_blank" rel="noreferrer noopener">Relationships between language features and library types</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/version-update-considerations.md" target="_blank" rel="noreferrer noopener">Version and update considerations for C# developers</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/csharp-version-history.md" target="_blank" rel="noreferrer noopener">The history of C#</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/csharp-11.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in C# 11</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/roslyn/blob/main/docs/compilers/CSharp/Compiler+Breaking+Changes+-+DotNet+7.md" target="_blank" rel="noreferrer noopener">Breaking changes in Roslyn after .NET 6 all the way to .NET 7</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/csharp-12.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in C# 12</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/roslyn/blob/main/docs/compilers/CSharp/Compiler+Breaking+Changes+-+DotNet+8.md" target="_blank" rel="noreferrer noopener">Breaking changes in Roslyn after .NET 7 all the way to .NET 8</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/csharp-13.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in C# 13</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/roslyn/blob/main/docs/compilers/CSharp/Compiler+Breaking+Changes+-+DotNet+9.md" target="_blank" rel="noreferrer noopener">Breaking changes in Roslyn after .NET 8 all the way to .NET 9</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/core/whats-new/dotnet-8/overview.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in .NET 8</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/core/whats-new/dotnet-8/runtime.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in the .NET 8 runtime</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/core/whats-new/dotnet-8/sdk.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in the SDK and tooling for .NET 8</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/core/whats-new/dotnet-8/containers.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in containers for .NET 8</a>&nbsp;</li>
</ul>



<p>which were published after the training of this GPT-4-32k (0613) model, contain detailed sections regarding the new capabilities of C# 12. By incorporating this content into the prompt, we can supply the LLM with the required knowledge. In the following example, I have expanded the previous code to download the web page content and then insert it into a user message.</p>



<p>This approach ensures that the LLM is provided with the latest information to assist with the query.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using System.Text;

namespace chatSampleApp05
{
    internal class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
            var kernel = builder.Build();

            //Create new chat
            var chatService = kernel.GetRequiredService&lt;IChatCompletionService>();
            var chat = new ChatHistory(
                    systemMessage: "You are an AI assistant that helps people find information."
                );

            // Download the documents and add their contents to our chat
            var articleList = new List&lt;string>
            {
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/relationships-between-language-and-library.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/version-update-considerations.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-version-history.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-11.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%207.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-12.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%208.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-13.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%209.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/overview.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/runtime.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/sdk.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/containers.md"

            };
            
            var articleStringBuilder = new StringBuilder();

            using (var httpClient = new HttpClient())
            {
                foreach (var article in articleList) {
                    articleStringBuilder.Append(await httpClient.GetStringAsync(article));
                }

                chat.AddUserMessage($"Here's some additional information: {articleStringBuilder.ToString()}");
            }

            string question;
            StringBuilder stringBuilder = new StringBuilder();

            //Question and Answer loop
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;
                chat.AddUserMessage(question);

                stringBuilder.Clear();
                Console.Write("Mine Copilot: ");

                await foreach (var message in chatService.GetStreamingChatMessageContentsAsync(chat))
                {
                    Console.Write(message);
                    stringBuilder.Append(message.Content);
                }
                Console.WriteLine();
                chat.AddAssistantMessage(stringBuilder.ToString());
                Console.WriteLine();
            }
        }
    }
}</pre>



<p>and the result?</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="362" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_08.png" alt="semantic kernel" class="wp-image-28885" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 19" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_08.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_08-300x97.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_08-768x249.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_08-495x161.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>We have gone over the context window almost twice without adding any history to the conversation. We clearly need to include less information, while still ensuring that what we include is relevant. RAG will help us&#8230;&nbsp;</p>
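<p>To see why the prompt overflows, we can roughly estimate its token count before sending it. The sketch below uses the common heuristic of about four characters per token for English text; the character count is an illustrative assumption, and a real implementation would use the model&#8217;s actual tokenizer instead of this heuristic.</p>

```csharp
using System;

class TokenEstimate
{
    // Rough heuristic: ~4 characters per token for English text.
    // A real implementation would use the model's tokenizer instead.
    public static int EstimateTokens(string text) => text.Length / 4;

    static void Main()
    {
        // Assume the concatenated articles came to ~230,000 characters (illustrative).
        int promptTokens = EstimateTokens(new string('x', 230_000));
        int contextWindow = 32_768; // GPT-4-32k context window

        bool fits = promptTokens <= contextWindow;
        Console.WriteLine($"Estimated prompt tokens: {promptTokens}");
        Console.WriteLine($"Fits in context window: {fits}");
    }
}
```

<p>With numbers in that ballpark, the estimate lands at nearly twice the 32k window, which matches what we observed above.</p>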



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/07/Marek-Dobkowski-1-bezloczkowy-kwadrat.jpg" alt="Marek Dobkowski 1 bezloczkowy kwadrat" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 20"></div><div class="tile-content"><p class="entry-title client-name promotion-box__headline2">Interested in building AI-powered apps?</p>
<p class="promotion-box__description2">Connect with <strong>Marek Dobkowski</strong> to explore Microsoft solutions that simplify your workflow and enhance productivity.</p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek1@gfi.fr/" target="_blank" rel="noopener">Schedule a meeting</a>






</div></div></div></div></td></tr></tbody></table></figure>



<h3 class="wp-block-heading">Using RAG</h3>



<p><strong>R</strong>etrieval <strong>A</strong>ugmented <strong>G</strong>eneration (RAG) essentially means looking up relevant information and incorporating it into the prompt. Instead of including all possible information in the prompt, we index the additional information we care about. When a question is asked, we use that question to find the most relevant indexed content and add just that specific content to the prompt. To facilitate this process, we need embeddings.&nbsp;</p>



<p>An embedding can be thought of as a vector (array) of floating-point values that represents the content and its semantic meaning. We can use a model specifically designed for embeddings to generate such a vector for a given input, and then store both the vector and the original text in a database. Later, when a question is posed, we can process that question through the same model to produce a vector, which we then use to find the closest embeddings in our database. We are not looking for exact matches, but rather for sufficiently similar ones. The term &#8216;close&#8217; here is quite literal, as the lookups are typically performed using distance measures like cosine similarity. For instance, consider this program:&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Embeddings;
using System.Numerics.Tensors;

namespace chatSampleApp06
{
    internal class Program
    {
        static string embeddingDeploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:EmbeddingDeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Experimental
            #pragma warning disable SKEXP0010,SKEXP0001

            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();

            builder.AddAzureOpenAITextEmbeddingGeneration(embeddingDeploymentName, endpoint, apiKey);

            var kernel = builder.Build();

            var input = "What is a reptile?";
            var examples = new string[]
            {
                "What is a reptile?",
                "¿Qué es un reptil?",
                "Was ist ein Reptil?",
                "A turtle is a reptile.",
                "Eidechse ist ein Beispiel für Reptilien.",
                "Crocodiles, lizards, snakes, and turtles are all examples.",
                "A frog is green.",
                "A grass is green.",
                "A cat is a mammal.",
                "A dog is a man's best friend.",
                "My best friend is Mike.",
                "I'm working at Inetum Polska since 2013."
            };

            // Generate embeddings for each piece of text
            var embeddingGenerator = kernel.GetRequiredService&lt;ITextEmbeddingGenerationService>();
            var inputEmbedding = (await embeddingGenerator.GenerateEmbeddingsAsync([input])).First();

            var exampleEmbeddings = (await embeddingGenerator.GenerateEmbeddingsAsync(examples)).ToArray();
            var similarities = new List&lt;Tuple&lt;float, string>>();

            // Print the cosine similarity between the input and each example
            for (int i = 0; i &lt; exampleEmbeddings.Length; i++)
            {
                similarities.Add(
                    new Tuple&lt;float, string>(
                        TensorPrimitives.CosineSimilarity(exampleEmbeddings[i].Span, inputEmbedding.Span), 
                        examples[i]));
            }

            similarities.Sort((x,y) => y.Item1.CompareTo(x.Item1));

            Console.WriteLine("Similarity\tExample");
            foreach (var similarity in similarities) {
                Console.WriteLine($"{similarity.Item1:F6}\t{similarity.Item2}");
            }
            Console.ReadLine();
        }
    }
}</pre>



<p>This process utilizes the AzureOpenAI embedding generation service to obtain an embedding vector (using the text-embedding-3-large model mentioned earlier in the post) for both an input and several other pieces of text. It then compares the resulting embedding for the input with the embeddings of those other texts, sorts the results based on similarity, and prints them out.&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="390" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_09.png" alt="semantic kernel" class="wp-image-28888" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 21" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_09.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_09-300x105.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_09-768x269.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_09-495x173.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>
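<p>Under the hood, the similarity score is simple vector math: the dot product of the two vectors divided by the product of their magnitudes. As a dependency-free illustration of what TensorPrimitives.CosineSimilarity computes, here is a minimal sketch with hand-made vectors (the example vectors are my own, not real embeddings):</p>

```csharp
using System;

class CosineDemo
{
    // Cosine similarity: dot(a, b) / (|a| * |b|).
    // ~1.0 means the same direction; ~0 means unrelated directions.
    public static double CosineSimilarity(double[] a, double[] b)
    {
        double dot = 0, magA = 0, magB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            magA += a[i] * a[i];
            magB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
    }

    static void Main()
    {
        double[] v1 = { 1, 2, 3 };
        double[] v2 = { 2, 4, 6 };  // same direction, just scaled
        double[] v3 = { 3, -1, 0 }; // unrelated direction

        Console.WriteLine(CosineSimilarity(v1, v2)); // ~1.0
        Console.WriteLine(CosineSimilarity(v1, v3)); // close to 0
    }
}
```

<p>Note that the magnitude of the vectors does not matter, only their direction; that is why a scaled copy of a vector still scores ~1.0.</p>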



<p>Let&#8217;s incorporate this concept into the chat app. In this round, I have augmented the previous chat example with a few things:&nbsp;</p>



<ul class="wp-block-list">
<li>In order for Semantic Kernel to handle the embedding generation through its abstractions, we need to include its Memory package. Please note the --prerelease flag, as this is an evolving area. While some Semantic Kernel components are stable, others are still in development and therefore marked as prerelease.&nbsp;</li>
</ul>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dotnet add package Microsoft.SemanticKernel.Plugins.Memory --prerelease </pre>



<ul class="wp-block-list">
<li>Next, I need to create an ISemanticTextMemory for querying. I achieved this by using MemoryBuilder to combine an embeddings generator with a database. I specified the Azure OpenAI service as my embeddings generator using the WithAzureTextEmbeddingGenerationService method. For the store, I registered a VolatileMemoryStore instance using the WithMemoryStore method. Although we will change this later, it will suffice for now. VolatileMemoryStore is essentially an implementation of Semantic Kernel&#8217;s IMemoryStore abstraction that wraps an in-memory dictionary.&nbsp;</li>



<li>I downloaded the text and used Semantic Kernel&#8217;s TextChunker to break it into pieces. Then, I saved each piece to the memory store using SaveInformationAsync. This process generates an embedding for the text and stores the resulting vector along with the input text in the dictionary.&nbsp;</li>



<li>When it is time to ask a question, instead of just adding the question to the chat history and submitting it, we first use the question to perform a SearchAsync on the memory store. This generates an embedding vector for the question and searches the store for the closest vectors. I have it return the three closest matches, append the associated text together, add the results to the chat history, and submit it. After submitting the request, I remove this additional context from the chat history to avoid sending it again in subsequent requests, as it can consume much of the allowed context window.&nbsp;</li>
</ul>



<p>And the full source code&#8230;&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.Memory;
using Microsoft.SemanticKernel.Text;
using System.Net;
using System.Text;
using System.Text.RegularExpressions;

namespace chatSampleApp07
{
    internal class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string embeddingDeploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:EmbeddingDeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Experimental
            #pragma warning disable SKEXP0001, SKEXP0010, SKEXP0050

            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
            var kernel = builder.Build();

            //Initialize Memory Builder
            var memoryBuilder = new MemoryBuilder()
                .WithMemoryStore(new VolatileMemoryStore())
                .WithAzureOpenAITextEmbeddingGeneration(
                    deploymentName: embeddingDeploymentName,
                    endpoint: endpoint,
                    apiKey: apiKey
                );

            var memory = memoryBuilder.Build();

            // Download the documents and add their contents to our chat
            var articleList = new List&lt;string>
            {
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/relationships-between-language-and-library.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/version-update-considerations.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-version-history.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-11.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%207.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-12.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%208.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-13.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%209.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/overview.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/runtime.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/sdk.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/containers.md"
            };

            var collectionName = "microsoft-news";

            using (var httpClient = new HttpClient())
            {
                var allParagraphs = new List&lt;string>();

                foreach (var article in articleList)
                {
                    var content = await httpClient.GetStringAsync(article);
                    var lines = TextChunker.SplitPlainTextLines(content, 64);
                    var paragraphs = TextChunker.SplitPlainTextParagraphs(lines, 512);

                    allParagraphs.AddRange(paragraphs);
                }

                for (var i = 0; i &lt; allParagraphs.Count; i++)
                {
                    await memory.SaveInformationAsync(collectionName, allParagraphs[i], $"paragraph[{i}]");
                }
            }

            //Create new chat
            var chatService = kernel.GetRequiredService&lt;IChatCompletionService>();
            var chat = new ChatHistory(
                    systemMessage: "You are an AI assistant that helps people find information."
                );

            string question;
            var responseBuilder = new StringBuilder();
            var contextBuilder = new StringBuilder();

            //Question and Answer loop
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;
                //Reset the retrieved context from the previous question
                contextBuilder.Clear();

                await foreach (var result in memory.SearchAsync(collectionName, question, limit: 3))
                {
                    contextBuilder.AppendLine(result.Metadata.Text);
                }

                var contextToRemove = -1;
                if (contextBuilder.Length > 0)
                {
                    contextBuilder.Insert(0, "Here's some additional information: ");
                    contextToRemove = chat.Count;
                    chat.AddUserMessage(contextBuilder.ToString());
                }

                chat.AddUserMessage(question);

                responseBuilder.Clear();
                Console.Write("Mine Copilot: ");

                await foreach (var message in chatService.GetStreamingChatMessageContentsAsync(chat,null, kernel))
                {
                    Console.Write(message);
                    responseBuilder.Append(message.Content);
                }

                Console.WriteLine();
                chat.AddAssistantMessage(responseBuilder.ToString());

                if (contextToRemove >= 0)
                {
                    chat.RemoveAt(contextToRemove);
                }

                Console.WriteLine();
            }
        }
    }
}</pre>



<p>The text chunking code divided the documents into 104 &#8220;paragraphs&#8221;, resulting in 104 embeddings being created and stored in the database. The exciting part is that with all these embeddings, when we pose our question, the database retrieves the most relevant material and adds just that text to the prompt. Now, when we ask the same question as before, we receive a much more helpful and accurate response:</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="570" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_10.png" alt="semantic kernel" class="wp-image-28891" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 22" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_10.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_10-300x153.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_10-768x393.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_10-495x253.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="Storing-memories">Storing memories</h2>



<p>Naturally, we do not want to reindex all documents every time the application starts. Imagine this were a public website facilitating chats for thousands of users across hundreds of documents; reindexing all content on every restart would be not only time-consuming but also unnecessarily expensive. For instance, the Azure OpenAI embedding model I use costs €0.000121 per 1,000 tokens (Azure OpenAI Service pricing), meaning indexing just those documents costs a couple of cents (but remember: &#8220;scale makes a difference&#8221;). </p>
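<p>The cost arithmetic is straightforward: tokens divided by 1,000, times the per-1,000-token rate. Here is a minimal sketch; the rate is the one quoted above, while the token count per run is an illustrative assumption, not a measured figure:</p>

```csharp
using System;

class EmbeddingCost
{
    // Cost = tokens / 1000 * rate-per-1000-tokens
    public static decimal IndexingCost(long tokens, decimal ratePer1000) =>
        tokens / 1000m * ratePer1000;

    static void Main()
    {
        decimal rate = 0.000121m; // EUR per 1,000 tokens (rate from the article)
        long oneRun = 60_000;     // illustrative token count for our documents

        Console.WriteLine($"One indexing run: ~{IndexingCost(oneRun, rate):F4} EUR");

        // "Scale makes a difference": the same indexing repeated 1,000 times
        Console.WriteLine($"1,000 runs: ~{IndexingCost(oneRun * 1000, rate):F2} EUR");
    }
}
```

<p>A single run is fractions of a cent, but repeating it on every restart of a busy service multiplies that cost for no benefit, which is exactly why we want persistent storage.</p>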



<p>Therefore, we should switch to using persistent storage. Semantic Kernel provides various IMemoryStore implementations, and we can easily switch to one that persists the results. For example, let&#8217;s switch to one based on Sqlite. To do this, we need another NuGet package:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dotnet add package Microsoft.SemanticKernel.Connectors.Sqlite --prerelease</pre>



<p>and with that, we can change just one line of code to switch from the VolatileMemoryStore:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">.WithMemoryStore(new VolatileMemoryStore())</pre>



<p>to the SqliteMemoryStore:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">.WithMemoryStore(await SqliteMemoryStore.ConnectAsync("data\\rag-data.db"))</pre>



<p>Sqlite is an embedded SQL database engine that operates within the same process and stores its data in standard disk files. In this case, it will connect to a rag-data.db file, creating it if it does not already exist. However, if we were to run this, we would still end up generating the embeddings again, as our previous example did not include a check to see if the data already existed. Therefore, our final step is to add a guard to prevent this redundant work.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">var collectionName = "microsoft-news";
var collections = await memory.GetCollectionsAsync();

if (!collections.Contains(collectionName))
{                       
    ... // same code as before to download and process the documents
}
else
{
    Console.WriteLine($"Found '{collectionName}' in RAG database");
}</pre>



<p>You get the idea. Here is the complete version using Sqlite:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.Connectors.Sqlite;
using Microsoft.SemanticKernel.Memory;
using Microsoft.SemanticKernel.Text;
using System.Text;

namespace chatSampleApp08
{
    internal class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string embeddingDeploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:EmbeddingDeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Experimental
            #pragma warning disable SKEXP0001, SKEXP0010, SKEXP0020, SKEXP0050

            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
            var kernel = builder.Build();

            //Initialize Memory Builder
            var memoryBuilder = new MemoryBuilder()
                .WithMemoryStore(await SqliteMemoryStore.ConnectAsync("data\\rag-data.db"))
                .WithAzureOpenAITextEmbeddingGeneration(
                    deploymentName: embeddingDeploymentName,
                    endpoint: endpoint,
                    apiKey: apiKey
                );

            var memory = memoryBuilder.Build();

            var collectionName = "microsoft-news";
            var collections = await memory.GetCollectionsAsync();
            
            if (!collections.Contains(collectionName))
            {                       
                // Download the documents and add their contents to our chat
                var articleList = new List&lt;string>
                {
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/relationships-between-language-and-library.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/version-update-considerations.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-version-history.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-11.md",
                    "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%207.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-12.md",
                    "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%208.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-13.md",
                    "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%209.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/overview.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/runtime.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/sdk.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/containers.md"
                };

                using (var httpClient = new HttpClient())
                {
                    var allParagraphs = new List&lt;string>();

                    foreach (var article in articleList)
                    {
                        var content = await httpClient.GetStringAsync(article);
                        var lines = TextChunker.SplitPlainTextLines(content, 64);
                        var paragraphs = TextChunker.SplitPlainTextParagraphs(lines, 512);

                        allParagraphs.AddRange(paragraphs);
                    }

                    for (var i = 0; i &lt; allParagraphs.Count; i++)
                    {
                        await memory.SaveInformationAsync(collectionName, allParagraphs[i], $"paragraph[{i}]");
                    }
                }
            }
            else
            {
                Console.WriteLine($"Found '{collectionName}' in RAG database");
            }

            //Create new chat
            var chatService = kernel.GetRequiredService&lt;IChatCompletionService>();
            var chat = new ChatHistory(
                    systemMessage: "You are an AI assistant that helps people find information."
                );

            string question;
            var responseBuilder = new StringBuilder();
            var contextBuilder = new StringBuilder();

            //Question and Answer loop
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;

                await foreach (var result in memory.SearchAsync(collectionName, question, limit: 3))
                {
                    contextBuilder.AppendLine(result.Metadata.Text);
                }

                var contextToRemove = -1;
                if (contextBuilder.Length > 0)
                {
                    contextBuilder.Insert(0, "Here's some additional information: ");
                    contextToRemove = chat.Count;
                    chat.AddUserMessage(contextBuilder.ToString());
                }

                chat.AddUserMessage(question);

                responseBuilder.Clear();
                Console.Write("Mine Copilot: ");

                await foreach (var message in chatService.GetStreamingChatMessageContentsAsync(chat, null, kernel))
                {
                    Console.Write(message);
                    responseBuilder.Append(message.Content);
                }

                Console.WriteLine();
                chat.AddAssistantMessage(responseBuilder.ToString());

                if (contextToRemove >= 0)
                {
                    chat.RemoveAt(contextToRemove);
                }

                Console.WriteLine();
            }
        }
    }
}</pre>



<p>Now, when we run it, the first invocation will still index everything, but after that, the data will already be indexed:</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="620" height="75" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_11.png" alt="semantic kernel" class="wp-image-28894" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 23" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_11.png 620w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_11-300x36.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_11-495x60.png 495w" sizes="auto, (max-width: 620px) 100vw, 620px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>and subsequent invocations can simply use it.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="592" height="191" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_12.png" alt="semantic kernel" class="wp-image-28897" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 24" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_12.png 592w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_12-300x97.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_12-495x160.png 495w" sizes="auto, (max-width: 592px) 100vw, 592px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>While SQLite is a fantastic tool, it is not specifically optimized for performing these types of searches. In fact, the code for the SqliteMemoryStore in SK merely enumerates the entire database and performs a <code>CosineSimilarity</code> check on each entry.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">// from: https://github.com/microsoft/semantic-kernel/blob/9264b3e0b42e184b7e9e8b2a073d8a721c4af92a/dotnet/src/Connectors/Connectors.Memory.Sqlite/SqliteMemoryStore.cs#L135

await foreach (var record in this.GetAllAsync(collectionName, cancellationToken).ConfigureAwait(false))
{
    if (record is not null)
    {
        double similarity = TensorPrimitives.CosineSimilarity(embedding.Span, record.Embedding.Span);
        ...
    }
}</pre>
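<p>To see why this linear scan gets expensive, here is a minimal, illustrative Python sketch (not the actual SK code) of what a brute-force cosine-similarity search over stored embeddings boils down to: every query touches every record, so cost grows linearly with the size of the collection.</p>

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def linear_search(query, records, limit=3):
    # Score every stored embedding, O(n) per query,
    # just like the enumerate-and-compare loop above.
    scored = [(cosine_similarity(query, emb), key) for key, emb in records]
    scored.sort(reverse=True)
    return scored[:limit]

# Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions.
records = [
    ("paragraph[0]", [1.0, 0.0, 0.0]),
    ("paragraph[1]", [0.0, 1.0, 0.0]),
    ("paragraph[2]", [0.7, 0.7, 0.0]),
]
print(linear_search([1.0, 0.0, 0.0], records, limit=2))
```

<p>Dedicated vector databases avoid this full scan by building approximate-nearest-neighbor indexes, which is why they stay fast as collections grow.</p>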



<p>For real scale and the ability to share data across multiple frontends, we need a dedicated &#8216;vector database&#8217; designed for storing and searching embeddings. There are many such vector databases available now, including Azure AI Search, Chroma, Milvus, Pinecone, Qdrant, Weaviate, and many more&#8230; We can easily set one of these up, change our <code>WithMemoryStore</code> call to use the appropriate connector, and we are ready to go. Let&#8217;s proceed with that. For this example, I have chosen Azure AI Search.</p>



<p>I add the relevant Semantic Kernel &#8220;connector&#8221; to my project:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dotnet add package Microsoft.SemanticKernel.Connectors.AzureAISearch --prerelease </pre>



<p>and then add a couple of lines:&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">static string azureAISearchEndpoint = Environment.GetEnvironmentVariable("AI:AzureAISearch:Endpoint")!;
static string azureAISearchApiKey = Environment.GetEnvironmentVariable("AI:AzureAISearch:APIKey")!;</pre>



<p>change from:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">.WithMemoryStore(await SqliteMemoryStore.ConnectAsync("data\\rag-data.db"))</pre>



<p>to:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">.WithMemoryStore(new AzureAISearchMemoryStore(azureAISearchEndpoint, azureAISearchApiKey)) </pre>



<p>And that&#8217;s it! The application works as before but much faster.&nbsp;&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="821" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13-1296x821.png" alt="semantic kernel" class="wp-image-28900" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 25" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13-1296x821.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13-300x190.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13-768x487.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13-495x314.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13.png 1321w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="Enhancing-your-Semantic-integration:-use-the-Semantic-best-practices-and-considerations">Enhancing your Semantic Kernel integration: best practices and considerations</h2>



<p>Wow! Clearly, I have left out many crucial details that any real application would need to address. For instance, how should the data being indexed be cleaned, normalized, and chunked? How should errors be handled? How can we limit the amount of data sent with each request, for example by restricting the chat history or the size of the retrieved embeddings? How can we make the application more secure (API key vs. Managed Identity)? Which service is best for storing all the information? </p>
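<p>As a taste of the chunking question, here is a much-simplified, illustrative Python sketch of a greedy chunker with a character budget (a real chunker, such as SK&#8217;s TextChunker, works with token counts and smarter split points):</p>

```python
def chunk_paragraphs(lines, max_chars=512):
    # Greedily pack lines into chunks no longer than max_chars,
    # splitting only on line boundaries. A single oversized line
    # becomes its own (oversized) chunk.
    chunks, current = [], ""
    for line in lines:
        candidate = current + "\n" + line if current else line
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = line
    if current:
        chunks.append(current)
    return chunks
```

<p>For example, three lines of 300, 300, and 100 characters pack into two chunks under a 512-character budget: the first line alone, then the second and third together.</p>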



<p>And there are many other considerations, including making the UI much more attractive than my basic Console.WriteLine calls. Despite these missing details, I hope it is evident that you can start integrating this kind of functionality into your applications right away.</p>



<p><strong>Also read:</strong><a href="https://nearshore-it.eu/technologies/python-pandas-tutorial-check-our-complete-introduction-to-pandas/"><strong> </strong>Introduction to Python Pandas Libraries</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/create-ai-chat-with-semantic-kernel/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_06.mp4" length="24632603" type="video/mp4" />

		<media:content url="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_06.mp4" medium="video" width="1116" height="876">
			<media:player url="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_06.mp4" />
			<media:title type="plain">Technologies Archives - Nearshore Software Development Company - IT Outsourcing Services</media:title>
			<media:thumbnail url="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_cover-1.jpg" />
			<media:rating scheme="urn:simple">nonadult</media:rating>
		</media:content>
	</item>
		<item>
		<title>Data Mesh vs Data Lake: Choosing the Perfect Data Architecture for Your Business </title>
		<link>https://nearshore-it.eu/articles/data-mesh-vs-data-lake/</link>
					<comments>https://nearshore-it.eu/articles/data-mesh-vs-data-lake/#respond</comments>
		
		<dc:creator><![CDATA[Tomasz Pajdo]]></dc:creator>
		<pubDate>Fri, 30 Aug 2024 08:52:20 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[Data]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=28830</guid>

					<description><![CDATA[In this article, we will explore the basic differences between Data Mesh and Data Lake, delve into their strategies, compare how data products are managed, and discuss best practices for data integration in both approaches ]]></description>
										<content:encoded><![CDATA[
<p>In the ever-changing world of data, companies are constantly looking for the most effective ways to store, manage, and use data. Two popular data management approaches are the traditional Data Lake and the modern Data Mesh. Each of them has its own strengths and weaknesses, which makes the choice between them crucial for organizations that want to maximize the potential of their data. In this article, we will explore the basic differences between Data Mesh and Data Lake, delve into their strategies, compare how data products are managed, and discuss best practices for data integration in both approaches.<strong>&nbsp;</strong></p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#Data-Warehouse-vs-Data-Lake-vs-Data-Mesh">1.  Data Warehouse vs Data Lake vs Data Mesh </a></li>
                    <li><a href="#What-is-Data-Lake">2.  What is Data Lake </a></li>
                    <li><a href="#What-is-Data-Mesh?">3.  What is Data Mesh?</a></li>
                    <li><a href="#Data-Lake-vs-Data-Mesh-–-key-differences">4.  Data Lake vs Data Mesh – key differences </a></li>
                    <li><a href="#Data-Mesh-integration-vs.-Data-Lake-Best-Practices">5.  Data Mesh integration vs. Data Lake Best Practices</a></li>
                    <li><a href="#What-is-the-difference-between-Data-Mesh-vs-Data-Lake-data-products?">6.  What is the difference between Data Mesh vs Data Lake data products?</a></li>
                    <li><a href="#When-to-choose-Data-Lake">7.  When to choose Data Lake</a></li>
                    <li><a href="#When-to-choose-Data-Mesh">8.  When to choose Data Mesh </a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="Data-Warehouse-vs-Data-Lake-vs-Data-Mesh">Data Warehouse vs Data Lake vs Data Mesh&nbsp;</h2>



<p>Initially (in the late 1980s), the destination of data was a data warehouse. It was powered by ETL processes, meaning data was extracted from source systems, transformed into the target form, and loaded into the target destination. What has been constantly changing over the years is the target form of the data to be analyzed.&nbsp;</p>



<h2 class="wp-block-heading" id="What-is-Data-Lake">What is Data Lake&nbsp;</h2>



<p><strong>Definition and purpose</strong>&nbsp;</p>



<p>The Data Lake approach is a centralized repository that allows companies to store all structured and unstructured data at any scale. Unlike traditional databases that store data in tables and columns, Data Lake stores data in its raw form until it is needed for analysis. This means you can store everything from raw logs, photos, videos, to processed datasets in Data Lake.&nbsp;</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="756" height="200" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.30_graphic_1.png" alt="data mesh vs data lake" class="wp-image-28840" title="Data Mesh vs Data Lake: Choosing the Perfect Data Architecture for Your Business  26" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.30_graphic_1.png 756w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.30_graphic_1-300x79.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.30_graphic_1-495x131.png 495w" sizes="auto, (max-width: 756px) 100vw, 756px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p><strong>Key characteristics</strong>&nbsp;</p>



<ol start="1" class="wp-block-list">
<li><strong>Scalability:</strong> Data Lake is highly scalable and allows you to store huge amounts of data from different sources. Using cloud solutions, it offers unlimited storage space for your data.&nbsp;&nbsp;</li>



<li><strong>Flexibility:</strong> As data is stored in raw format, Data Lake offers flexibility by allowing data scientists and data analysts to process data in different ways depending on their needs.&nbsp;</li>



<li><strong>Cost-effectiveness:</strong> Formats such as Parquet organize data by columns rather than rows, which is very efficient for analytical queries that access only a subset of columns. This can lead to a significant reduction in storage requirements and costs.&nbsp;&nbsp;</li>



<li><strong>Data Variety:</strong> Data Lake can handle structured, semi-structured, and unstructured data, making it ideal for organizations that use a wide variety of data sources.&nbsp;&nbsp;</li>
</ol>



<h3 class="wp-block-heading">Challenges in data governance in distributed data structure&nbsp;</h3>



<p>The Data Lake architecture is not without its challenges. Lack of structure can lead to a so-called data swamp, in which managing data and using it effectively becomes difficult. Without proper governance, data quality can degrade, making it hard to find the data you need for analysis and to trust it.&nbsp;</p>



<h2 class="wp-block-heading" id="What-is-Data-Mesh?">What is Data Mesh?&nbsp;&nbsp;</h2>



<p><strong>Definition and purpose</strong>&nbsp;</p>



<p><a href="https://nearshore-it.eu/articles/unveiling-data-mesh-architecture-principles/" target="_blank" rel="noreferrer noopener">Data Mesh</a> is a relatively new concept that decentralizes data management by treating data as a product and assigning ownership to individual business domains. In the Data Mesh architecture, each domain – e.g. sales, marketing or finance – manages its own flows and data sets. This approach allows for more flexible and autonomous data management.&nbsp;</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="756" height="430" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.30_graphic_2.png" alt="data mesh vs data lake" class="wp-image-28843" title="Data Mesh vs Data Lake: Choosing the Perfect Data Architecture for Your Business  27" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.30_graphic_2.png 756w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.30_graphic_2-300x171.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.30_graphic_2-495x282.png 495w" sizes="auto, (max-width: 756px) 100vw, 756px" /></figure>
</div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p><strong>Key characteristics</strong>&nbsp;</p>



<ol start="1" class="wp-block-list">
<li><strong>Domain-oriented design:</strong> Data Mesh transfers data ownership to the domain teams that are closest to that data, making it easier to ensure its high quality.&nbsp;&nbsp;</li>



<li><strong>Scalability:</strong> Data Mesh provides scalability through decentralization. Each domain can be scaled independently, reducing bottlenecks and improving the efficiency of working with data.&nbsp;</li>



<li><strong>Data as a product:</strong> This principle emphasizes that data should be treated as a product, with dedicated teams ensuring its quality, availability, and usability.&nbsp;</li>



<li><strong>Interoperability:</strong> Data Mesh promotes interoperability between different domains, ensuring that data can be easily shared and used across the organization.&nbsp;</li>
</ol>



<h3 class="wp-block-heading">Challenges in Data Mesh strategy&nbsp;</h3>



<p>The implementation of the Data Mesh architecture requires significant cultural and organizational changes. It requires domain teams to have the necessary skills and resources to manage their own data, which can be challenging in organizations in which data management has traditionally been centralized. Moreover, the decentralized nature can sometimes lead to inconsistencies without proper governance.&nbsp;</p>


<div class="promotion-box promotion-box--image-left promotion-box--full-width-without-image"><div class="tiles latest-news-once"><div class="tile"><div class="tile-content"><p class="promotion-box__description2"><strong>Consult your project directly with a specialist</strong></p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek@gfi.fr/" target="_blank" rel="noopener">Book a meeting</a></div></div></div></div>



<h2 class="wp-block-heading" id="Data-Lake-vs-Data-Mesh-–-key-differences">Data Lake vs Data Mesh – key differences</h2>



<p><strong>Centralization vs decentralization</strong>&nbsp;</p>



<p>The most significant difference between Data Lake and Data Mesh lies in the approach of these architectures to data management. Data Lake is a centralized repository. This means that all data is stored in one place and managed by a central team. Such centralization can facilitate the execution of standards and ensure consistency across the organization.&nbsp;</p>



<p>Data Mesh, on the other hand, is about decentralization. Each domain in the organization is responsible for its own data, i.e. data management and ownership are distributed among different teams. This can lead to greater flexibility and specialization in the field, but requires robust management to avoid fragmentation.&nbsp;</p>



<p><strong>Data owners</strong>&nbsp;</p>



<p>In the Data Lake architecture, the central IT or data team usually owns and manages the data. This can sometimes result in so-called bottlenecks, because the central team may not have the domain-specific knowledge needed to effectively manage data.&nbsp;</p>



<p>On the other hand, Data Mesh assigns data ownership to domain teams. This not only empowers those who are closest to the data, but also ensures that the data is managed with the specific needs of the domain in mind. However, this requires domain teams to have the necessary skills and resources for data management.&nbsp;</p>



<p><strong>Data Management in Data Mesh and Data Lake</strong>&nbsp;</p>



<p>Data management is a key factor in any data architecture. In Data Lake, management is often easier to enforce because all data is centralized. However, this can lead to rigid structures that may not meet the needs of all users.&nbsp;</p>



<p>Data Mesh requires a management approach where each domain is responsible for managing its own data. This can lead to more flexible and domain-specific management policies, but it also calls for a strong, overarching management framework to ensure consistency and interoperability across the organization.&nbsp;</p>



<p><strong>Scalability</strong>&nbsp;</p>



<p>Both Data Lake and Data Mesh offer scalability, but they do so in different ways. Data Lake scales by adding more storage and computing power to the centralized repository. This can be cost-effective, but it can also negatively impact performance as the system grows.&nbsp;&nbsp;</p>



<p>Scaling in Data Mesh is done by distributing data management across different domains. Each domain can be scaled independently, reducing the risk of bottlenecks and enabling more flexible and responsive data management. However, in this approach management can be more complicated because it requires coordination between various domains.&nbsp;</p>



<p><strong>Flexibility and agility</strong>&nbsp;</p>



<p>Flexibility is another area in which these architectures differ significantly. Data Lake provides flexibility in terms of the types of data it can store and the way that data is processed. However, its centralized nature can sometimes limit agility, as changes or new requirements may have to wait for a central team to handle them.&nbsp;</p>



<p>Data Mesh is inherently more flexible because each domain can manage its own data as needed. This can have a positive impact on innovation and enable the adaptation of data management to business needs. However, this flexibility comes at the expense of increased complexity and the need for robust coordination.&nbsp;</p>



<p><strong>Read also:</strong></p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/articles/data-mesh-vs-data-fabric-a-guide-to-better-data-management/" data-type="link" data-id="https://nearshore-it.eu/articles/data-mesh-vs-data-fabric-a-guide-to-better-data-management/">Data-Driven Decision Making</a></li>



<li><a href="https://nearshore-it.eu/articles/what-is-data-quality/">What is Data Quality?</a></li>
</ul>



<h2 class="wp-block-heading" id="Data-Mesh-integration-vs.-Data-Lake-Best-Practices">Data Mesh integration vs. Data Lake Best Practices</h2>



<p><strong>Data Integration in Data Lake architecture</strong>&nbsp;</p>



<p>In Data Lake, data integration typically involves centralizing data from different sources into a single repository. This process can be complex, especially for data from different systems that use different formats, schemas, and protocols. Best practices for data integration in Data Lake include:&nbsp;</p>



<ul class="wp-block-list">
<li><strong>ETL/ELT Processes:</strong> Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) processes are critical for cleaning, transforming, and organizing the data being transferred to the Data Lake. Properly designed ETL flows ensure the usability and consistency of data.&nbsp;</li>



<li><strong>Schema-on-Read:</strong> Unlike traditional databases, Data Lake often uses the “Schema-on-Read” approach, which means that data is stored in raw form and transformed into the required schema during a read or query. This provides flexibility, efficiency and usability, but involves the need for thorough design.&nbsp;</li>



<li><strong>Data Catalog:</strong> Implementing a data catalog is essential to managing the massive volumes of data in Data Lake. Data Catalog&nbsp; helps users to discover, understand, and trust the data being shared for analysis.&nbsp;</li>



<li><strong>Data management:</strong> To maintain data quality, security, and compliance in a centralized environment, a robust governance framework is essential. This includes metadata management, access control, and audit mechanisms.&nbsp;</li>
</ul>
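<p>The &#8220;Schema-on-Read&#8221; practice above can be illustrated with a small, purely illustrative Python sketch (the field names are made up): raw records land in the lake as-is, even when inconsistent, and the schema is applied only when the data is read.</p>

```python
import io
import json

# Raw events stored as-is; a schema-on-write system would reject
# the inconsistent rows, schema-on-read keeps them all.
raw = io.StringIO(
    '{"user": "alice", "amount": "12.50"}\n'
    '{"user": "bob"}\n'
    '{"user": "carol", "amount": 3}\n'
)

def read_with_schema(fp):
    # Apply the schema at read time: coerce types, default missing fields.
    for line in fp:
        rec = json.loads(line)
        yield {"user": str(rec["user"]),
               "amount": float(rec.get("amount", 0))}

rows = list(read_with_schema(raw))
print(rows)
```

<p>The flexibility comes at a price: every consumer must know (or share) the read-time schema, which is exactly why a data catalog and governance matter in a Data Lake.</p>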



<p><strong>Data Integration in Data Mesh architecture</strong>&nbsp;</p>



<p>Data integration in the Data Mesh architecture is more decentralized and domain-based. Each business domain is responsible for integrating its own data to create standalone, interoperable data products. &nbsp;<br>Best practices for data integration in Data Mesh include:&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Domain-Driven Design:</strong> Data integration processes should be tailored to the specific needs of each domain. This includes designing data flows that are consistent with the workflows and data models within a given domain.&nbsp;</li>



<li><strong>APIs and Interfaces:</strong> To ensure interoperability between domains, data products should be made available through well-defined APIs or data interfaces. This makes it easier to share data and integrate it across the organization.&nbsp;</li>



<li><strong>Federated Governance:</strong> While domains are responsible for their own data, this governance model ensures that integration standards, data quality, and compliance requirements are met across all domains. This often involves the need for a central management team to work closely with domain teams to define and enforce standards.&nbsp;</li>



<li><strong>Event-Driven Architecture:</strong> Data Mesh often uses an event-driven architecture, in which changes in one domain&#8217;s data can trigger updates in another. This keeps data in sync and up to date across the organization.&nbsp;</li>
</ul>
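<p>A minimal, in-process Python sketch of the event-driven idea (illustrative only; a real mesh would use a broker such as Kafka or Azure Event Grid, and the topic and field names here are invented): one domain publishes an event, and another domain updates its own data product in response.</p>

```python
from collections import defaultdict

class EventBus:
    # Minimal in-process pub/sub standing in for a real message broker.
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
finance_view = {}

# The finance domain keeps its data product current by reacting
# to events published by the sales domain.
bus.subscribe("sales.order_placed",
              lambda e: finance_view.update({e["order_id"]: e["total"]}))

bus.publish("sales.order_placed", {"order_id": "A-1", "total": 99.0})
print(finance_view)
```

<p>The design point is the decoupling: the sales domain never calls finance directly, so either side can evolve its internal data model independently.</p>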



<h2 class="wp-block-heading" id="What-is-the-difference-between-Data-Mesh-vs-Data-Lake-data-products?">What is the difference between Data Mesh vs Data Lake data products?</h2>



<h3 class="wp-block-heading">What is a data product&nbsp;</h3>



<p>A data product is a resource created from data that brings value to users, usually by providing practical insights, enabling decision making or automating processes. Unlike raw data, which is often unstructured and not immediately useful, data products are designed to be used directly by end users or applications.&nbsp;</p>



<h3 class="wp-block-heading">Data Products in Data Lake&nbsp;</h3>



<p>In a Data Lake architecture, data products are typically created and maintained by a centralized data team. These products often come from raw data stored in the Data Lake, which is transformed and processed to meet the needs of different business units. The responsible team manages the entire lifecycle of these data products centrally, from ingestion through processing and storage to their eventual retrieval by users.&nbsp;</p>



<p>While this centralized approach can ensure consistency and uniformity of data across the organization, it can also lead to delays in data delivery. A centralized team may lack the domain expertise required to create highly specialized data products, leading to generic, non-specialized solutions that may not fully meet the needs of specific business units.&nbsp;</p>



<h3 class="wp-block-heading">Data Products in Data Mesh&nbsp;</h3>



<p>Data Mesh treats data as a product from the very beginning. Each domain is responsible for creating and maintaining its own data products. These products are designed with specific users and use cases in mind, making them more useful for users and the business unit that created them.&nbsp;</p>



<p>This decentralized approach allows for more flexibility and innovation as domain teams can quickly change their data products to meet changing business needs. However, it also requires robust governance and cross-domain collaboration to ensure the interoperability of data products and enable the overall needs of the organization to be met.&nbsp;</p>



<p>The key difference lies in the approach to design and ownership. In Data Lake, data products are often reactive and standardized, created after data collection. In Data Mesh, data products are proactive and specialized, designed as part of the data lifecycle in the domain.&nbsp;</p>



<h2 class="wp-block-heading" id="When-to-choose-Data-Lake">When to choose Data Lake</h2>



<p>Data Lake is a good choice for organizations that:&nbsp;</p>



<ul class="wp-block-list">
<li>Need a centralized repository for all their data, especially when storing large amounts of varied data as the so-called &#8220;single source of truth&#8221;.&nbsp;</li>



<li>Use centralized teams of data scientists or analysts who need access to raw data for exploratory data analysis and the use of Machine Learning.&nbsp;</li>



<li>Prefer a more traditional, centralized approach to data management.&nbsp;</li>



<li>Have an experienced data management team that is able to handle the complexity of Data Lake without falling into the “data swamp” trap.&nbsp;</li>
</ul>



<p><strong>Also read: </strong></p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/articles/data-mesh-vs-data-fabric-a-guide-to-better-data-management/">Data Mesh vs Data Fabric</a></li>



<li><a href="https://nearshore-it.eu/articles/5-popular-business-intelligence-tools-2/" target="_blank" rel="noreferrer noopener">5 most popular data analytics tools</a> </li>
</ul>



<h2 class="wp-block-heading" id="When-to-choose-Data-Mesh">When to choose Data Mesh</h2>



<p>The Data Mesh architecture may be more appropriate for organizations that:&nbsp;</p>



<ul class="wp-block-list">
<li>Have a complex organizational structure with multiple domains or business units that require autonomy in data management.&nbsp;</li>



<li>Strive to enable domain teams to take responsibility for their data, adapting their management to business needs.&nbsp;</li>



<li>Are looking for more flexibility and responsiveness in their data management practices.&nbsp;</li>



<li>Have the right resources and organizational culture to handle the transition to decentralized data management.&nbsp;</li>



<li>Need high-quality, specialized data products.&nbsp;</li>
</ul>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/06/BigCTA_MarekCzachorowski.jpg" alt="BigCTA MarekCzachorowski" title="Data Mesh vs Data Lake: Choosing the Perfect Data Architecture for Your Business  28"></div><div class="tile-content"><p class="entry-title client-name promotion-box__headline2">Elevate Your Data Strategy</p>
<p class="promotion-box__description2">Our customized Data solutions align with your business objectives. Consult with <strong>Marek Czachorowski</strong>, Head of Data and AI Solutions, for expert guidance.</p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek@gfi.fr/" target="_blank" rel="noopener">Schedule a meeting</a></div></div></div></div></td></tr></tbody></table></figure>



<h2 class="wp-block-heading">Summary&nbsp;</h2>



<p>Data Lake is a traditional approach that stores data in a centralized location. It is characterized by scalability but comes with certain data-governance challenges. Data Mesh is a decentralized approach in which data owners from various domains are responsible for specific data and its quality, making it available to data consumers from other domains.</p>



<p>The choice between Data Lake and Data Mesh depends mostly on the structure of the organization, its needs, and its readiness for change. Data Lake offers a centralized, flexible, and cost-effective solution for storing large amounts of varied data, but it requires good data management. Data Mesh, although more complex and requiring significant changes in organizational culture, offers a decentralized approach that better adapts data management to business needs and allows for greater flexibility and scalability.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/data-mesh-vs-data-lake/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Self-service BI: What is it? Applications, Benefits, Risks, and Future with AI</title>
		<link>https://nearshore-it.eu/articles/self-service-bi/</link>
					<comments>https://nearshore-it.eu/articles/self-service-bi/#respond</comments>
		
		<dc:creator><![CDATA[Marcin Prys]]></dc:creator>
		<pubDate>Wed, 21 Aug 2024 07:53:13 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[Business Intelligence]]></category>
		<category><![CDATA[Data]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=28589</guid>

					<description><![CDATA[Read our article to find out what makes it different from the traditional approach, how to implement SSBI, and what software solutions might suit your business.]]></description>
										<content:encoded><![CDATA[
<h3 class="wp-block-heading">Introduction to Self-Service Business Intelligence</h3>



<p>Self-Service Business Intelligence (SSBI) is a modern approach that allows end-users to analyze data and generate reports without involving IT specialists or data analysts. It is designed for business team members who do not have advanced technical skills but need access to data to make informed business decisions. Read our article to find out what makes it different from the traditional approach, how to implement SSBI, and what software solutions might suit your business.</p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#Traditional-BI-and-self-service-BI---what-is-the-difference?">1.  Traditional BI and self-service BI &#8211; what is the difference?</a></li>
                    <li><a href="#Benefits-of-Self-Service-BI">2.  Benefits of Self-Service BI</a></li>
                    <li><a href="#Risks-of-Self-Service-Business-Intelligence">3.  Risks of Self-Service Business Intelligence</a></li>
                    <li><a href="#Implementation-of-Self-Service-Business-Intelligence-and-BI-best-practices">4.  Implementation of Self-Service Business Intelligence and BI best practices</a></li>
                    <li><a href="#Examples-of-Self-Service-BI-tools">5.  Examples of Self-Service BI tools </a></li>
                    <li><a href="#The-future-of-AI-powered-BI-and-self-service-BI">6.  The future of AI-powered BI and self-service BI</a></li>
                    <li><a href="#BI-strategy-for-enhanced-analytics">7.  BI strategy for enhanced analytics</a></li>
                    <li><a href="#Summary">8.  Summary </a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="Traditional-BI-and-self-service-BI---what-is-the-difference?">Traditional BI and self-service BI &#8211; what is the difference?</h2>



<p>In the traditional approach to reporting, users order reports that the IT department then creates. This ties up IT resources and is quite time-consuming, especially when reports are needed right away.</p>



<p>Modern BI solutions allow you to gain insights from a variety of data sources and access data for self-service analytics. Self-service is a convenient way to ease the burden on business analyst departments. However, I advise against over-enthusiasm: it&#8217;s worth bearing in mind that a self-service BI tool doesn&#8217;t do everything by itself. To prepare more advanced analyses and reports, you still need to involve BI specialists or get training, for example under the guidance of a solution provider.</p>



<h3 class="wp-block-heading">What is Self-Service BI?</h3>



<p>Self-Service Business Intelligence is a set of tools and technologies that enable business users to independently collect, process, and analyze data. With SSBI, users can create their own reports, dashboards, and data visualizations, which speeds up decision-making. These tools are typically user-friendly and offer intuitive interfaces, minimizing the need for technical support.</p>



<p><strong>Also read: </strong></p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/articles/what-is-data-quality/">Data Quality Importance in Data Governance</a></li>



<li><a href="https://nearshore-it.eu/articles/data-driven-decision-making/" data-type="link" data-id="https://nearshore-it.eu/articles/data-driven-decision-making/">Data-Driven Decision Making</a></li>



<li><a href="https://nearshore-it.eu/articles/data-mesh-vs-data-lake/">Data Mesh vs Data Lake</a></li>
</ul>



<h3 class="wp-block-heading">Applications of Self-Service Business Intelligence</h3>



<p>SSBI is used in many fields, such as:</p>



<ul class="wp-block-list">
<li><strong>Marketing:</strong> Analyzing marketing campaigns, customer segmentation, monitoring ROI.</li>



<li><strong>Finance:</strong> Cost analysis, revenue forecasting, monitoring financial performance.</li>



<li><strong>Sales:</strong> Monitoring sales results, analyzing customer behaviors, predicting market trends.</li>



<li><strong>Operations:</strong> Optimizing processes, monitoring operational performance, analyzing supply chains.</li>



<li><strong>HR:</strong> Analyzing personnel data, monitoring employment metrics, assessing employee performance.</li>
</ul>



<h2 class="wp-block-heading" id="Benefits-of-Self-Service-BI">Benefits of Self-Service BI</h2>



<p>SSBI offers many benefits, including:</p>



<ul class="wp-block-list">
<li><strong>Faster decision-making:</strong> Users have immediate access to the necessary data, allowing for quicker decisions.</li>



<li><strong>Cost reduction:</strong> Less involvement of IT teams in creating reports and analyzing data.</li>



<li><strong>Greater flexibility:</strong> Users can customize reports and analyses to meet their individual needs.</li>



<li><strong>Improved data quality:</strong> Users can continuously verify and correct data, which leads to higher-quality information.</li>



<li><strong>Increased innovation:</strong> Access to SSBI tools stimulates creativity and innovation in data analysis.</li>
</ul>



<h2 class="wp-block-heading" id="Risks-of-Self-Service-Business-Intelligence">Risks of Self-Service Business Intelligence</h2>



<p>Despite many benefits, SSBI also involves certain risks:</p>



<ul class="wp-block-list">
<li><strong>Data security:</strong> Lack of proper security measures can lead to data leaks or unauthorized access.</li>



<li><strong>Data quality:</strong> Users may not have the necessary skills to correctly interpret data, leading to erroneous conclusions.</li>



<li><strong>Improper use of tools:</strong> Without proper training, users may use SSBI tools incorrectly.</li>



<li><strong>Information overload:</strong> Excessive data can lead to difficulties in analysis and interpretation.</li>
</ul>



<h2 class="wp-block-heading" id="Implementation-of-Self-Service-Business-Intelligence-and-BI-best-practices">Implementation of Self-Service Business Intelligence and BI best practices</h2>



<p>The implementation process of SSBI requires a thoughtful approach, encompassing several key stages:</p>



<ol class="wp-block-list">
<li><strong>Data preparation:</strong> Identifying and preparing data that will be easy to analyze is crucial. Data must be accurate, consistent, and up-to-date. The best solution is a central data repository that users can access. Popular tools like Power BI can use this prepared data, facilitating analysis.</li>



<li><strong>Selection of SSBI tools:</strong> Many SSBI tools are available on the market, such as Power BI, Tableau, or Qlik. It is important to choose a tool that best meets the organization&#8217;s needs and is easy for end-users to use.</li>



<li><strong>User training:</strong> To effectively use SSBI tools, users need to be trained. Training should cover the basics of using tools, creating reports and dashboards, and data interpretation. This way, they will be able to prepare reports and analyses themselves without waiting for further support.</li>



<li><strong>Monitoring and support:</strong> After implementing the SSBI tool, it is important to monitor its use and provide support to users. Regular data updates, training sessions, and technical support help maintain the high quality and efficiency of SSBI tools.</li>
</ol>
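<p>The data-preparation step above can be sketched as a simple validation pass over incoming records before they reach a central repository. The field names, thresholds, and rules below are illustrative assumptions, not part of Power BI or any specific SSBI product:</p>

```python
from datetime import date, timedelta

# Hypothetical sales records feeding a central repository (illustrative only).
records = [
    {"order_id": 1, "region": "EMEA", "amount": 1200.0, "updated": date.today()},
    {"order_id": 2, "region": "", "amount": -50.0,
     "updated": date.today() - timedelta(days=90)},
]

def validate(record, max_age_days=30):
    """Return a list of data-quality issues found in one record."""
    issues = []
    if not record["region"]:
        issues.append("missing region")    # consistency check
    if record["amount"] < 0:
        issues.append("negative amount")   # accuracy check
    if (date.today() - record["updated"]).days > max_age_days:
        issues.append("stale record")      # freshness check
    return issues

# Map each order to its list of issues; clean records map to [].
report = {r["order_id"]: validate(r) for r in records}
```

<p>In practice, checks like these would run in the data pipeline feeding the repository, so every self-service user analyzes the same verified data.</p>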


<div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/06/BigCTA_MarekCzachorowski.jpg" alt="BigCTA MarekCzachorowski" title="Self-service BI: What it is? Applications, Benefits, Risks, and Future with AI 29"></div><div class="tile-content"><p class="entry-title client-name promotion-box__headline2">Elevate Your Data Strategy</p>
<p class="promotion-box__description2">Our customized Data solutions align with your business objectives. Consult with <strong>Marek Czachorowski</strong>, Head of Data and AI Solutions, for expert guidance.</p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek@gfi.fr/" target="_blank" rel="noopener">Schedule a meeting</a></div></div></div></div>



<h2 class="wp-block-heading" id="Examples-of-Self-Service-BI-tools">Examples of Self-Service BI tools</h2>



<p>Traditional BI tools are often complex software requiring specialized knowledge to operate, while self-service BI platforms are designed to be easy for non-technical users. Self-service BI tools let users access and analyze business data without engaging BI specialists. Below we present a list of popular self-service tools for data analytics &#8211; get to know those <a href="https://nearshore-it.eu/articles/5-popular-business-intelligence-tools-2/">BI tools&#8217; capabilities </a>in detail to choose the best self-service tool.</p>



<h3 class="wp-block-heading">Tableau</h3>



<p>Tableau is a self-service tool with a drag-and-drop function that enables users to easily create and manage visualizations and dashboards. The system is popular thanks to its simple interface and intuitiveness.</p>



<h3 class="wp-block-heading">Microsoft Power BI</h3>



<p>Microsoft Power BI is integrated with the Azure cloud platform. This solution can be easily handled by anyone. Power BI comes with modern reports and dashboards made available on all devices. Data for reports is collected from various sources to create multidimensional data models, making it possible to conduct analyses in real time.&nbsp;</p>



<h3 class="wp-block-heading">Qlik&nbsp;</h3>



<p>The Qlik analytics engine allows users to explore data and easily create sharable reports. Qlik Sense is a powerful platform that, thanks to AI and ML capabilities, allows business users to exploit the full potential of their data.</p>



<p><strong>Read more:</strong></p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/articles/xploring-the-power-of-qlik-nprinting/" data-type="link" data-id="https://nearshore-it.eu/articles/xploring-the-power-of-qlik-nprinting/">Exploring the power of Qlik NPrinting</a></li>



<li><a href="https://nearshore-it.eu/articles/qlik-talend-the-future-of-data/">Qlik + Talend – the future of data integration</a></li>
</ul>



<h2 class="wp-block-heading" id="The-future-of-AI-powered-BI-and-self-service-BI">The future of AI-powered BI and self-service BI</h2>



<p>The future of SSBI looks promising, especially in the context of integration with Artificial Intelligence (AI) technologies. AI can significantly improve the functionality of SSBI tools by:</p>



<ul class="wp-block-list">
<li><strong>Automation of analyses:</strong> AI can automatically analyze data and generate reports, further speeding up the decision-making process.</li>



<li><strong>Predictive analyses:</strong> AI can predict future trends and behaviors based on historical data, allowing for better planning and strategy.</li>



<li><strong>Personalization:</strong> AI can adapt SSBI tools to individual user needs, offering personalized ad hoc analysis and reports.</li>



<li><strong>Pattern recognition:</strong> AI can identify hidden patterns and correlations in data that are difficult for humans to detect.</li>
</ul>



<p><strong>Also read:</strong></p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/articles/what-is-data-quality/" data-type="link" data-id="https://nearshore-it.eu/articles/what-is-data-quality/">What is Data Quality? Data Management Best Practices &amp; More</a></li>



<li><a href="https://nearshore-it.eu/articles/data-mesh-vs-data-fabric-a-guide-to-better-data-management/">Data Mesh vs Data Fabric</a></li>



<li><a href="https://nearshore-it.eu/technologies/data-warehouse-architecture/">Data Warehouses management</a></li>
</ul>



<h2 class="wp-block-heading" id="BI-strategy-for-enhanced-analytics">BI strategy for enhanced analytics</h2>



<p>Choosing an SSBI tool alone is not enough. To become <a href="https://nearshore-it.eu/articles/data-driven-managment/">a data-driven organization</a>, you need a well-thought-out strategy. Which data will be analyzed? This is where the Master Data Management approach helps. Who will be responsible for the data project? Will you run it alone or in partnership with a Data Management services provider?</p>



<p>Implementation of a self-service BI strategy should include the following steps:</p>



<ul class="wp-block-list">
<li><strong>Selecting a Chief Data Officer (CDO)</strong> to support the initiative.</li>
<li><strong>Choosing a platform</strong> &#8211; if you are not sure which tool will meet your expectations, consider working with a partner company.</li>
<li><strong>Engaging users</strong> &#8211; representatives of different departments should support the implementation plan and be involved at every stage of the project.</li>
<li><strong>Creating a Business Intelligence team.</strong> Small teams typically consist of an analyst, a developer, and a tester; larger project teams add a release manager, an administrator, and a team manager.</li>
</ul>



<h2 class="wp-block-heading" id="Summary">Summary</h2>



<p>Self-Service Business Intelligence is a powerful tool that allows end-users to independently collect, process, and analyze data. While SSBI involves certain risks, the benefits of its application are significant. The future of SSBI, combined with AI technologies, seems even more promising, opening new possibilities in data analysis and business decision-making.</p>


]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/self-service-bi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI and IoT: Exploring the synergy between Artificial Intelligence and Internet of Things</title>
		<link>https://nearshore-it.eu/articles/how-ai-and-iot-work-together/</link>
					<comments>https://nearshore-it.eu/articles/how-ai-and-iot-work-together/#respond</comments>
		
		<dc:creator><![CDATA[-- Nie pokazuj autora --]]></dc:creator>
		<pubDate>Tue, 06 Aug 2024 07:06:40 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[IoT]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=28375</guid>

					<description><![CDATA[Read our article and learn about successful examples of AI Implementation in IoT projects. Is AI still in IoT, or can we already talk about Artificial Intelligence of Things?]]></description>
										<content:encoded><![CDATA[
<p>AI and IoT are a pair of technologies shaping the future of our interconnected world. Their synergy produces AI-powered IoT devices that can enhance business operations across many industries, allowing companies to make informed decisions or even prevent machinery failures. Read our article and learn about successful examples of AI implementation in IoT projects. Is AI still just a component of IoT, or can we already talk about the Artificial Intelligence of Things?</p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#AI-enabled-IoT.-Key-statistics-on-AI-implementations">1.  AI-enabled IoT. Key statistics on AI implementations</a></li>
                    <li><a href="#Future-of-AI-powered-IoT-applications">2.  Future of AI-powered IoT applications</a></li>
                    <li><a href="#Integration-of-AI-and-IoT-for-operational-efficiency">3.  Integration of AI and IoT for operational efficiency</a></li>
                    <li><a href="#Real-world-examples-of-AI-and-IoT-implementation">4.  Real-world examples of AI and IoT implementation</a></li>
                    <li><a href="#How-does-AI-improve-connectivity-and-decision-making-in-IoT-systems?">5.  How does AI improve connectivity and decision-making in IoT systems?</a></li>
                    <li><a href="#Machine-Learning-in-IoT">6.  Machine Learning in IoT</a></li>
                    <li><a href="#What-are-the-benefits-of-AI-and-IoT?">7.  What are the benefits of AI and IoT?</a></li>
                    <li><a href="#AI-in-IoT-or-Artificial-Intelligence-of-Things?">8.  AI in IoT or Artificial Intelligence of Things?</a></li>
                    <li><a href="#IoT-and-AI-trends-will-become-increasingly-important">9.  IoT and AI trends will become increasingly important</a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="AI-enabled-IoT.-Key-statistics-on-AI-implementations">AI-enabled IoT. Key statistics on AI implementations</h2>



<p>IoT is not just about smart cities, smart homes, and wearable gadgets; it is a transformative industrial advantage. In recent years, IoT has generated considerable hype, and the hype curve remains high. IoT and AI increasingly join forces to collect data and let IoT devices reach their full potential. According to IoT Analytics research, in 2022 only 17% of IoT solutions incorporated an Artificial Intelligence component.</p>



<h2 class="wp-block-heading" id="Future-of-AI-powered-IoT-applications">Future of AI-powered IoT applications</h2>



<p>However, this state is expected to change by 2027 when nearly <strong>50% of IoT solutions</strong> are predicted to have an AI component, <strong>with 13% being AI-based and 43% AI-augmented.</strong> That shows the real power of AI within Internet of Things solutions.&nbsp;&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Integration-of-AI-and-IoT-for-operational-efficiency">Integration of AI and IoT for operational efficiency</h2>



<h3 class="wp-block-heading">Why do companies decide to integrate AI into the Internet of Things?&nbsp;&nbsp;</h3>



<p>So where does the huge popularity of combining AI and IoT come from?</p>



<p>First, it is all about convenience and finding ways to expedite operations. We naturally seek efficiency, often preferring not to perform repetitive and laborious tasks ourselves.</p>



<p>Consequently, we delegate these tasks to computers, leveraging automation to handle them. By enabling automated systems to manage and control various processes, we can ensure that once we detect an issue or an opportunity, it can be addressed promptly and effectively by these systems.&nbsp;&nbsp;&nbsp;</p>



<h3 class="wp-block-heading">Now it is time to use AI&nbsp;&nbsp;</h3>



<p>Once we can control our connected devices, why not leave it to the computer to – using AI and Machine Learning techniques – analyze the data generated by IoT devices and assess whether something is going wrong with those smart devices? This allows us to take appropriate action. In this way, IoT and AI-enabled data analysis ensure operational excellence.</p>



<h2 class="wp-block-heading" id="Real-world-examples-of-AI-and-IoT-implementation">Real-world examples of AI and IoT implementation</h2>



<p>Many companies that have decided to integrate AI into their IoT environment can already boast interesting solutions. Our client is <strong>a leader in green energy and wind turbine manufacturing.</strong> They faced the challenge of <strong>operating thousands of wind turbines worldwide with only a few operators.</strong></p>



<h3 class="wp-block-heading">Their goals and challenges&nbsp;&nbsp;&nbsp;</h3>



<ul class="wp-block-list">
<li>Centralized maintenance of worldwide wind fleet&nbsp;&nbsp;&nbsp;</li>



<li>Different technologies (some 25 years old)</li>



<li>Operator learning period and rotation&nbsp;&nbsp;&nbsp;&nbsp;</li>



<li>Number of devices and alarms to handle&nbsp;&nbsp;&nbsp;</li>
</ul>



<h3 class="wp-block-heading" id="How-did-they-prepare-for-their-AI-with-IoT-journey:">How did they prepare for their AI-with-IoT journey?</h3>



<ul class="wp-block-list">
<li>Up to 300 built-in sensors&nbsp;&nbsp;&nbsp;</li>



<li>Different communication options – wired and wireless (mostly cellular)&nbsp;&nbsp;&nbsp;&nbsp;</li>



<li>Data delivery from periodical SMTP to real-time OPC-UA&nbsp;&nbsp;&nbsp;</li>
</ul>



<h3 class="wp-block-heading">Artificial Intelligence in IoT solution&nbsp;&nbsp;</h3>



<p>For the client, an AI solution turned out to be the best fit. It was based on sophisticated conditional rules running on a streaming analytics engine, where the incoming data was compared with enterprise data. This comparison produced many alerts, so to address them with a limited workforce, streaming analytics was integrated with the enterprise systems (SAP and ServiceNow). Simpler alerts of non-critical severity were managed automatically: the system created the proper work orders for the proper maintenance, relieving human operators to handle critical issues.</p>



<p>Once a problem is resolved in the field, its status is automatically synchronized back from the SAP systems to the IoT platform and the dashboard. This way, the operators know the problem is resolved. With this autonomous approach, the maintenance and operations teams reach very high productivity.</p>
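<p>The alert-handling flow described above can be sketched as a simple severity-based router. The severity labels, alert fields, and work-order shape below are illustrative assumptions, not the client's actual data model; the real solution ran on a streaming analytics engine integrated with SAP and ServiceNow:</p>

```python
# Severity labels, alert fields, and the work-order shape are assumptions
# made for illustration; they are not the client's actual data model.

def route_alert(alert, work_orders, operator_queue):
    """Escalate critical alerts to operators; auto-create work orders otherwise."""
    if alert["severity"] == "critical":
        operator_queue.append(alert)  # a human operator investigates
    else:
        work_orders.append({          # auto-generated maintenance work order
            "turbine": alert["turbine"],
            "task": f"inspect: {alert['code']}",
        })

work_orders, operator_queue = [], []
alerts = [
    {"turbine": "WT-017", "code": "GEARBOX_TEMP", "severity": "critical"},
    {"turbine": "WT-204", "code": "SENSOR_DRIFT", "severity": "minor"},
]
for alert in alerts:
    route_alert(alert, work_orders, operator_queue)
```

<p>In practice, the routing decision would feed a ticketing system such as ServiceNow rather than in-memory lists, but the division of labour is the same: machines clear the routine alerts, humans take the critical ones.</p>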



<figure class="wp-block-table"><table><tbody><tr><td><br><div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/06/BigCTA_AndrzejGumieniak.jpg" alt="BigCTA AndrzejGumieniak" title="AI and IoT: Exploring the synergy between Artificial Intelligence and Internet of Things 31"></div><div class="tile-content"><p class="entry-title client-name promotion-box__headline2">Streamline Your IoT Operations</p>
<p class="promotion-box__description2"><strong>Andrzej Gumieniak</strong>, our Head of Practice IoT, is here to help you navigate the complexities of IoT solutions. Book a consultation to discuss your case.</p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithAndrzej@gfi.fr/" target="_blank" rel="noopener">Schedule a meeting</a></div></div></div></div></td></tr></tbody></table></figure>



<h2 class="wp-block-heading" id="How-does-AI-improve-connectivity-and-decision-making-in-IoT-systems?">How does AI improve connectivity and decision-making in IoT systems?</h2>



<p>For that, we can take advantage of AI models pre-trained for specific use cases. This way we do not need to build sophisticated conditional rules based on many parameters inside our business logic, as AI supports us with such routine tasks. That leads to an especially important and useful application: prescriptive maintenance, a strategy that uses machine data to outline maintenance-related tasks.</p>



<p>Also read: <a href="https://nearshore-it.eu/articles/predictive-maintenance-systems/" target="_blank" rel="noreferrer noopener">Predictive maintenance</a>&nbsp;&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Machine-Learning-in-IoT">Machine Learning in IoT</h2>



<p>Machine Learning (ML) plays a significant role in the Internet of Things (IoT) ecosystem. By leveraging AI and ML algorithms, companies can extract valuable insights from the vast amounts of data generated by IoT sensors. Nowadays you can use IoT platforms (e.g. Cumulocity IoT) with built-in ML components that make Machine Learning model training and implementation easier and quicker.</p>



<h3 class="wp-block-heading">AI in IoT streamlines data analytics</h3>



<p>You can process the incoming data through analytics and predictive Machine Learning models. You no longer need to create alerts or work orders manually; instead, you can utilize an AI advisor or a more sophisticated rules engine that takes advantage of contextual data to support issue resolution.</p>
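<p>As a minimal stand-in for the predictive models mentioned above, the sketch below flags anomalous sensor readings in a stream with a rolling z-score. The window size, threshold, and readings are illustrative assumptions, not parameters of any specific IoT platform:</p>

```python
from collections import deque
from statistics import mean, stdev

class StreamDetector:
    """Toy streaming anomaly detector: flags values far from the recent window."""

    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings
        self.threshold = threshold          # z-score cutoff

    def observe(self, value):
        """Return True if value is anomalous relative to the rolling window."""
        anomalous = False
        if len(self.window) >= 5:           # need some history first
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

# Simulated temperature stream: stable around 20, then a spike.
det = StreamDetector()
readings = [20.0, 20.1, 19.9, 20.0, 20.2, 20.1, 19.8, 20.0, 90.0]
flags = [det.observe(v) for v in readings]
```

<p>On an IoT platform, a detector like this (or a trained ML model) would sit on the ingest stream and raise an alert only on the flagged readings, instead of forwarding every data point to an operator.</p>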



<h3 class="wp-block-heading">IoT and AI for better alarm monitoring&nbsp;&nbsp;</h3>



<p>AI-enabled automation eliminates the need for operators to sift through thousands of alarms. Operators can then focus on the most challenging cases, while simple and obvious ones can be managed automatically by the central IoT platform.&nbsp;&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="What-are-the-benefits-of-AI-and-IoT?">What are the benefits of AI and IoT?</h2>



<ul class="wp-block-list">
<li><strong>Faster issue resolution</strong> – data collected from IoT sensors is analyzed instantly by AI, which allows for quicker issue detection and resolution.</li>



<li><strong>Improved security </strong>– IoT security is considered one of the biggest challenges and concerns. AI algorithms can monitor access to IoT networks and detect suspicious activities.</li>



<li><strong>Predictive &amp; prescriptive maintenance capabilities for preventing downtime </strong>– by employing cognitive algorithms in IoT, you can prescribe actions and instruct the maintenance staff on what to do, or even issue the appropriate commands automatically.</li>
</ul>



<p>Also read: <a href="https://nearshore-it.eu/articles/top-5-iot-challenges-in-2024/" target="_blank" rel="noreferrer noopener">The top challenges in IoT</a>&nbsp;</p>



<h2 class="wp-block-heading" id="AI-in-IoT-or-Artificial-Intelligence-of-Things?">AI in IoT or Artificial Intelligence of Things?</h2>



<p>Artificial intelligence is a powerful tool in itself. When it becomes the heart of an organism such as a network of connected IoT devices, you get an intelligent system capable of learning and making decisions. In this context, there is increasing talk of <strong>AIoT (Artificial Intelligence of Things)</strong> technology, in which IoT devices talk to each other and act without human intervention.</p>



<h2 class="wp-block-heading" id="IoT-and-AI-trends-will-become-increasingly-important">IoT and AI trends will become increasingly important</h2>



<p>IoT applications increasingly leverage AI algorithms to analyze the vast amounts of data generated by industrial IoT devices. The combination of AI and IoT technologies allows for real-time data processing and decisions based on the insights gathered, already in industries including logistics, automotive, and manufacturing. Overall, the benefits of AI in IoT are vast and continue to expand as AI and IoT work together. We can expect this trend to gather pace, as the number of connected devices worldwide in 2025 is forecasted to hit 25 billion, according to <a href="https://iot-analytics.com/number-connected-iot-devices/" target="_blank" rel="noreferrer noopener">IoT Analytics.</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/how-ai-and-iot-work-together/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
