<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	 xmlns:media="http://search.yahoo.com/mrss/" >

<channel>
	<title>Cloud engineering &#8211; Nearshore Software Development Company &#8211; IT Outsourcing Services</title>
	<atom:link href="https://nearshore-it.eu/tag/cloud/feed/" rel="self" type="application/rss+xml" />
	<link>https://nearshore-it.eu</link>
	<description>We are a Nearshore Software Development Company with 14 years of experience in delivering large-scale IT projects in the areas of PHP, Java, .NET, BI and MDM.</description>
	<lastBuildDate>Mon, 10 Mar 2025 12:46:06 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3</generator>

<image>
	<url>https://nearshore-it.eu/wp-content/uploads/2023/01/cropped-inetum-favicon-300x300-1-32x32.png</url>
	<title>Cloud engineering &#8211; Nearshore Software Development Company &#8211; IT Outsourcing Services</title>
	<link>https://nearshore-it.eu</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>FinOps: smart strategies for cloud cost control</title>
		<link>https://nearshore-it.eu/webinars/finops-smart-strategies-for-cloud-cost-control/</link>
					<comments>https://nearshore-it.eu/webinars/finops-smart-strategies-for-cloud-cost-control/#respond</comments>
		
		<dc:creator><![CDATA[Piotr]]></dc:creator>
		<pubDate>Fri, 29 Nov 2024 08:49:52 +0000</pubDate>
				<category><![CDATA[Webinars]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=34164</guid>

					<description><![CDATA[Is your organization having difficulty managing cloud costs? You're not alone. Nearly 60% of companies face similar challenges with cloud spending, including communication issues, inaccurate forecasts, and difficulty sticking to budgets. How can you bring yours under control?]]></description>
										<content:encoded><![CDATA[
<p>Watch our free webinar where Dawid Janasik will dive into proven FinOps strategies to help your organization get control over cloud expenses and drive financial success. </p>



<ul class="wp-block-list">
<li><strong>Discover best practices: </strong>Get a comprehensive understanding of how FinOps works to control cloud costs and improve budgeting accuracy.</li>



<li><strong>Learn from our expert: </strong>He will share actionable strategies and real-world examples to help you apply FinOps in your organization.</li>



<li><strong>Boost accountability and efficiency: </strong>Understand how to drive collaboration and ownership in managing cloud resources effectively.</li>
</ul>



<h2 class="wp-block-heading has-text-align-center">To access the video, please fill in the form</h2>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<h3 class="wp-block-heading">Agenda:</h3>



<ol class="wp-block-list">
<li><strong>Kick-off and overview of FinOps.</strong></li>



<li><strong>Balancing Predictability and Flexibility of Cloud: </strong>How to manage cloud costs while staying agile.</li>



<li><strong>What is FinOps?</strong> Understanding the FinOps framework and its value for your organization.</li>



<li><strong>Collaboration is Key: </strong>The importance of a central FinOps team in driving cross-functional alignment.</li>



<li><strong>Business Value Drives Cloud Decision-Making:</strong> Aligning cloud spending with strategic business goals.</li>



<li><strong>Taking Ownership of Cloud Usage: </strong>Tips on fostering accountability for cloud usage across departments.</li>



<li><strong>Accessible and Timely Reporting: </strong>Best practices for real-time visibility and actionable reporting.</li>



<li><strong>Leveraging the Variable Cost Model of the Cloud: </strong>Making the most of cloud’s pay-as-you-go model for cost efficiency.</li>



<li><strong>Q&amp;A Session</strong></li>
</ol>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">[contact-form-7]
</div>
</div>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Speakers</h3>



<div class="wp-block-media-text is-stacked-on-mobile" style="grid-template-columns:15% auto"><figure class="wp-block-media-text__media"><img decoding="async" width="200" height="200" src="https://nearshore-it.eu/wp-content/uploads/2024/09/dawid-janasik.png" alt="dawid janasik" class="wp-image-32259 size-full" title="FinOps: smart strategies for cloud cost control 1" srcset="https://nearshore-it.eu/wp-content/uploads/2024/09/dawid-janasik.png 200w, https://nearshore-it.eu/wp-content/uploads/2024/09/dawid-janasik-150x150.png 150w" sizes="(max-width: 200px) 100vw, 200px" /></figure><div class="wp-block-media-text__content">
<p><strong>Dawid Janasik</strong><br>Senior DevOps engineer / Technical Leader</p>
</div></div>



<div style="height:25px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-media-text is-stacked-on-mobile" style="grid-template-columns:15% auto"><figure class="wp-block-media-text__media"><img fetchpriority="high" decoding="async" width="400" height="400" src="https://nearshore-it.eu/wp-content/uploads/2024/07/Marek_Dobkowski_circle_1.png" alt="Marek Dobkowski circle 1" class="wp-image-28323 size-full" title="FinOps: smart strategies for cloud cost control 2" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/Marek_Dobkowski_circle_1.png 400w, https://nearshore-it.eu/wp-content/uploads/2024/07/Marek_Dobkowski_circle_1-300x300.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/Marek_Dobkowski_circle_1-150x150.png 150w, https://nearshore-it.eu/wp-content/uploads/2024/07/Marek_Dobkowski_circle_1-395x395.png 395w" sizes="(max-width: 400px) 100vw, 400px" /></figure><div class="wp-block-media-text__content">
<p><strong>MAREK DOBKOWSKI</strong><br>Head of Microsoft Practice / Senior Azure Architect  </p>
</div></div>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/webinars/finops-smart-strategies-for-cloud-cost-control/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI Governance: the AI Maturity Journey</title>
		<link>https://nearshore-it.eu/webinars/ai-governance-the-ai-maturity-journey/</link>
					<comments>https://nearshore-it.eu/webinars/ai-governance-the-ai-maturity-journey/#respond</comments>
		
		<dc:creator><![CDATA[Piotr]]></dc:creator>
		<pubDate>Thu, 21 Nov 2024 13:47:00 +0000</pubDate>
				<category><![CDATA[Webinars]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=34277</guid>

					<description><![CDATA[Are you struggling to scale AI projects across your organization? You’re not alone. Did you know that less than 25% of AI projects succeed?]]></description>
										<content:encoded><![CDATA[
<p>Watch our free webinar on AI Governance: the AI Maturity Journey. Our experts will share actionable insights and practical strategies to help your organization thrive in the AI-driven world.</p>



<h2 class="wp-block-heading has-text-align-center">To access the video, please fill in the form</h2>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<h3 class="wp-block-heading">What you&#8217;ll learn:</h3>



<ol class="wp-block-list">
<li><strong>Why most AI projects fail</strong> – and how you can be part of the successful 25%.</li>



<li><strong>Building an AI-driven organization</strong> – discover the key pillars for sustainable AI adoption.</li>



<li><strong>Data-driven vs. AI-driven</strong> – learn the critical differences and why it matters.</li>



<li><strong>Crucial steps in AI adoption</strong> – what to focus on at each stage of the journey.</li>



<li><strong>Tech vs. Business Ownership</strong> – who should lead AI projects?</li>



<li><strong>Creating company-wide AI alignment</strong> – ensuring that every stakeholder is on board.</li>
</ol>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">[contact-form-7]
</div>
</div>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Speakers</h3>



<div class="wp-block-media-text is-stacked-on-mobile" style="grid-template-columns:15% auto"><figure class="wp-block-media-text__media"><img decoding="async" width="120" height="120" src="https://nearshore-it.eu/wp-content/uploads/2024/09/Marek_Czachorowski_circle120_greybg.png" alt="Marek Czachorowski circle120 greybg" class="wp-image-34203 size-full" title="AI Governance: the AI Maturity Journey 3"></figure><div class="wp-block-media-text__content">
<p><strong>Marek Czachorowski</strong><br>Head of Data &amp; AI Solutions</p>
</div></div>



<div style="height:25px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-media-text is-stacked-on-mobile" style="grid-template-columns:15% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="400" height="400" src="https://nearshore-it.eu/wp-content/uploads/2024/09/Krzysztof_Bratnicki_circle.png" alt="Krzysztof Bratnicki circle" class="wp-image-34206 size-full" title="AI Governance: the AI Maturity Journey 4" srcset="https://nearshore-it.eu/wp-content/uploads/2024/09/Krzysztof_Bratnicki_circle.png 400w, https://nearshore-it.eu/wp-content/uploads/2024/09/Krzysztof_Bratnicki_circle-300x300.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/09/Krzysztof_Bratnicki_circle-150x150.png 150w, https://nearshore-it.eu/wp-content/uploads/2024/09/Krzysztof_Bratnicki_circle-395x395.png 395w" sizes="auto, (max-width: 400px) 100vw, 400px" /></figure><div class="wp-block-media-text__content">
<p><strong>Krzysztof Bratnicki</strong><br>Business Development Manager  </p>
</div></div>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/webinars/ai-governance-the-ai-maturity-journey/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Top DevSecOps Tools: Ensuring Sensitive Data Security &#038; Compliance</title>
		<link>https://nearshore-it.eu/articles/devops-devsecops-safeguard-sensitive-data-with-right-tools/</link>
					<comments>https://nearshore-it.eu/articles/devops-devsecops-safeguard-sensitive-data-with-right-tools/#respond</comments>
		
		<dc:creator><![CDATA[Amadeusz Kryze]]></dc:creator>
		<pubDate>Wed, 09 Oct 2024 03:30:05 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<category><![CDATA[DevOps]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=33399</guid>

					<description><![CDATA[Learn about DevOps vs DevSecOps and integrate security into the software development lifecycle pipeline using DevSecOps security practices. ]]></description>
										<content:encoded><![CDATA[
<p>In today&#8217;s fast-paced software development landscape, integrating security into the development process is essential for protecting sensitive data and ensuring compliance. The DevSecOps model emphasizes the collaboration between development and operations teams to ensure that security practices are embedded throughout the software development lifecycle. By fostering a strong partnership between the DevSecOps team, security team, and operations team, organizations can effectively address security vulnerabilities and implement security controls early in the DevOps process. Read on to learn about DevSecOps and the security tools that can bring greater security to your IT project.&nbsp;</p>



<div class="table-of-contents">
    <p class="title">Table of contents</p>
    <ol>
                    <li><a href="#What-is-DevSecOps?">1.  What is DevSecOps? </a></li>
                    <li><a href="#Benefits-of-DevSecOps-">2.  Benefits of DevSecOps </a></li>
                    <li><a href="#DevSecOps-vs-DevOps-">3.  DevSecOps vs DevOps </a></li>
                    <li><a href="#DevSecOps-for-enhanced-application-security-">4.  DevSecOps for enhanced application security </a></li>
                    <li><a href="#DevSecOps-culture">5.  DevSecOps culture </a></li>
                    <li><a href="#Continuous-Integration">6.  Continuous Integration </a></li>
                    <li><a href="#Continuous-Delivery">7.  Continuous Delivery </a></li>
                    <li><a href="#Continuous-Security">8.  Continuous Security </a></li>
                    <li><a href="#Communication-and-collaboration-">9.  Communication and collaboration </a></li>
                    <li><a href="#DevSecOps-best-practices:-shift-security-left">10.  DevSecOps best practices: shift security left </a></li>
                    <li><a href="#Implementing-DevSecOps-and-automating-security-best-practices-">11.  Implementing DevSecOps and automating security best practices </a></li>
                    <li><a href="#Recommended-DevSecOps-tools">12.  Recommended DevSecOps tools </a></li>
                    <li><a href="#Successful-DevSecOps-and-DevOps-Integration-">13.  Successful DevSecOps and DevOps Integration </a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="What-is-DevSecOps?">What is DevSecOps?&nbsp;</h2>



<p>In the realm of modern craft, there arose a practice known among the learned as DevSecOps, a union of three great disciplines: development, security, and operations. It was not unlike the forging of a mighty alliance, wherein each realm must contribute its strength, lest their endeavors fall prey to the shadows of vulnerability.&nbsp;</p>



<p>In days past, many development teams would wait until the final hour, when the code was near release, before calling upon the wardens of security. Yet this path was fraught with peril, for to uncover weaknesses so late would often cost dear in time, gold, and effort. But the wisest of realms soon saw another way.&nbsp;</p>



<h2 class="wp-block-heading" id="Benefits-of-DevSecOps-">Benefits of DevSecOps&nbsp;</h2>



<p>By weaving security into every phase of the software development lifecycle – whether in the laying of the first line of code or the final shaping of the product – teams could safeguard their work from the outset and prevent a number of security issues. Through deep collaboration and the magic of automation, they crafted a system where all shared the burden of protection. Thus, no longer was security the task of the few, but of all who toiled together, forging a shield that would hold fast even in the face of the darkest threats.&nbsp;</p>



<p>In this way, DevSecOps was not merely a method, but a way of ensuring that no creation would leave the forges unguarded against the unseen dangers.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="DevSecOps-vs-DevOps-">DevSecOps vs DevOps&nbsp;</h2>



<p>In traditional software development, the crafting of great projects was long governed by the old ways, where the work was divided into phases – like the seasons of the year – each flowing one after the other. &nbsp;</p>



<p>There would come a time for planning, then design, and only after would the labor of development begin, followed by the testing and binding of all parts into a whole. Yet, this process, while orderly, was as slow as the march of the seasons themselves. In an age where customers&#8217; desires are ever-shifting, such a pace no longer sufficed.&nbsp;</p>



<p>Worse still, security specialists were oft called upon at the very end, when the work was near complete, to cast their protections over the product. Alas, this lateness was fraught with danger, for vulnerabilities uncovered at such a time could unravel much of the work that had come before.&nbsp;</p>



<p>Thus, many turned to a new way: the DevOps model. Here, rather than waiting for the long seasons to pass, the DevOps teams delivered smaller, yet finely crafted parcels of work – each one polished and ready – rather than undertaking vast projects that stretched on for years. &nbsp;</p>



<p>In this way, the teams of development and operations joined forces, testing and refining their work as they went. By using the tools of automation and forging standardized processes, they moved with great speed, yet kept the quality of their software product intact.&nbsp;</p>



<h2 class="wp-block-heading" id="DevSecOps-for-enhanced-application-security-">DevSecOps for enhanced application security&nbsp;</h2>



<p>But in time, the guilds realized there was a further step to be taken. They sought not just swiftness, but security woven into every step of their journey. And so was born DevSecOps, where security was not an afterthought but a companion to every stage of the craft. From the very first whispers of planning, security was present, and in the fires of development, it was tested and shaped alongside the code. The burden of security threat protection no longer lay on the shoulders of a few, but on the entire fellowship of creators. This new way became known to some as ‘shift left security&#8217;, for it brought the guardians of protection to the forefront of the process, ensuring that no ill would befall their work from the very start of the journey to its end.&nbsp;</p>



<p><strong>Read also:</strong> </p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/technologies/azure-cost-management-101-how-to-optimize-cloud-costs/">Azure Cost Management 101</a></li>



<li><a href="https://nearshore-it.eu/technologies/cloud-native/">Cloud-native applications: what do you need to know?</a></li>
</ul>



<h3 class="wp-block-heading">Why is DevSecOps important?&nbsp;</h3>



<p>Many foes seek to gain entry and plunder a company&#8217;s most valuable treasures – its data and assets. One of the most common practices they employ is the exploitation of weaknesses hidden deep within the folds of the organization&#8217;s own software. These vulnerabilities, if left unchecked, are like cracks in the foundation of a mighty fortress, and through them, adversaries can slip in unnoticed, wreaking havoc from within.&nbsp;</p>



<p>Such breaches can be devastating. They consume both time and money, and in their wake, they leave scars upon a company&#8217;s name, causing trust to wither among clients and partners alike.&nbsp;</p>



<h2 class="wp-block-heading" id="DevSecOps-culture">DevSecOps culture&nbsp;</h2>



<p>But there is hope within DevSecOps. By following this path, the guilds of development, security, and operations stand together, ever watchful. The framework is like a vigilant sentry, reducing the risk of sending forth software riddled with flaws, misconfigurations, or vulnerabilities. Through constant vigilance and the weaving of security into every phase of creation, they close the gates through which bad actors might pass, fortifying their code so that it may stand strong against the onslaught of those who would seek to exploit it. Thus, the company stands firm, its reputation unshaken, its defenses prepared for the battles yet to come.&nbsp;</p>



<h2 class="wp-block-heading" id="Continuous-Integration">Continuous Integration&nbsp;</h2>



<p>In the world of software development, there once was a time when the labor of many artisans was brought together only at the final hour, when all the pieces were near completion. But it was often in those moments, at the very end, that flaws were revealed, and the seams of their work would unravel, leaving them with a tangle of issues too great to swiftly resolve.&nbsp;</p>



<p>But a new method arose, known as Continuous Integration, a practice where the builders of code did not wait for the end to unite their works. Instead, they would commit their efforts to a central repository many times throughout the day, like artisans returning to the hearth to meld their creations together piece by piece. Each time they did so, their work was automatically tested and integrated, ensuring that all parts fit together in harmony.&nbsp;</p>



<p>By catching integration issues and bugs early, long before the final forge was set aflame, this approach saved the guilds from the chaos of last-minute discoveries.&nbsp;&nbsp;</p>
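<p>By way of illustration (the <code>run_ci_gate</code> helper below is our own sketch, not any CI product&#8217;s API), such a merge gate can be reduced to a few lines of Python:</p>

```python
import subprocess
import sys

def run_ci_gate(test_command: list[str]) -> bool:
    """Run the given test command; integration proceeds only when it exits cleanly."""
    result = subprocess.run(test_command, capture_output=True, text=True)
    return result.returncode == 0

# A commit hook or CI server might call, for example:
#   run_ci_gate([sys.executable, "-m", "unittest", "discover"])
# and refuse the integration when it returns False.
```

In practice this check runs automatically on every commit to the central repository, so a broken build is discovered within minutes rather than at the final forge.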



<h2 class="wp-block-heading" id="Continuous-Delivery">Continuous Delivery&nbsp;</h2>



<p>Building upon the foundation of Continuous Integration, a further practice emerged, known as Continuous Delivery. Where the former ensured that code was swiftly integrated and tested, Continuous Delivery took this one step further, automating the journey from the builder&#8217;s hands to a staging environment, where code would be tested.&nbsp;</p>



<p>Once the code reached this staging ground, it did not rest. The system immediately set to work, not only with unit testing, but with a series of trials to ensure that all aspects of the creation were sound. The user interface was inspected to ensure it responded as intended, the seams of integration were examined for any weaknesses, and the APIs were tested to confirm they communicated well between systems. The code was also tested under the weight of simulated traffic, to see if it could bear the burden of the many users it would one day serve.&nbsp;</p>
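<p>The sequence of trials described above can be sketched as an ordered pipeline that halts at the first failing stage – a simplified model of our own, not the API of any real delivery tool:</p>

```python
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> tuple[bool, list[str]]:
    """Run staged checks in order; return overall success and the stages that passed."""
    passed: list[str] = []
    for name, check in stages:
        if not check():          # a failing stage halts promotion to the next one
            return False, passed
        passed.append(name)
    return True, passed
```

In a real pipeline each stage would invoke the unit, UI, integration, API, and load tests against the staging environment.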



<p>The aim of Continuous Delivery is simple: to consistently deliver code that is not only complete, but of true value.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Continuous-Security">Continuous Security&nbsp;</h2>



<p>In the DevSecOps practice, one of the most vital elements is the weaving of security into every step of the software&#8217;s journey, from the first step of design to the final unveiling. No longer could security be treated as an afterthought, called upon only when the work was nearly complete. Instead, it became a core part of the process, guiding the work like an unseen but ever-present hand.&nbsp;</p>



<p>From the earliest stages, when the blueprints of the software were still taking shape, the guilds would engage in threat modeling, a process of foresight where they sought to uncover any potential dangers. They did not wait for the enemy to strike, but anticipated its moves, fortifying their code against unseen attacks before they could take root.&nbsp;</p>



<p>As the work progressed, automated security testing was woven into every stage of the DevOps workflow. Automation played a key role, testing the code continuously, from the developers&#8217; own environments to the farthest reaches of the deployment pipeline. No phase was left unguarded; no section of code went untested.&nbsp;</p>



<p>Through this constant vigilance – testing early and testing often – the teams were able to find and mend weaknesses swiftly. And so, they delivered their software products with confidence, knowing that each line of code was protected. With DevSecOps, the road to production became one of fewer pitfalls and greater security, allowing organizations to deliver secured software swiftly and with minimal issues.  &nbsp;</p>



<h2 class="wp-block-heading" id="Communication-and-collaboration-">Communication and collaboration&nbsp;</h2>



<p>In the practice of DevSecOps, the strength of the software does not lie solely in the application security tools or the processes but in the fellowship of those who undertake the journey together. It is a path that demands more than mere skill; it calls for deep collaboration and unity of purpose among individuals and teams.&nbsp;</p>



<p>When the developers commit their work to the central repository in the practice of Continuous Integration, conflicts in code inevitably arise. But it is through collaboration – where minds come together, and voices are heard – that these challenges are resolved. Developers, security experts, and operations alike must work side by side, swiftly addressing these conflicts so the flow of progress is not hindered.&nbsp;</p>



<p>But beyond the technical work, there is a greater need – communication. Teams must speak openly and often, sharing their visions and aligning around the same goals. Without this shared understanding, the work would drift in many directions, and the efforts of one might undo the labors of another. In the DevSecOps approach, every hand contributes to the same creation, and every voice is heard in the great chorus.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="DevSecOps-best-practices:-shift-security-left">DevSecOps best practices: shift security left&nbsp;</h2>



<h3 class="wp-block-heading">Planning and development&nbsp;</h3>



<p>Introducing security policies and addressing security risks early in the rhythm of development sprints is akin to fortifying the foundations of a great structure before the first stones are laid. By addressing vulnerabilities in the early stages, teams not only reduce the risk of future threats but also save valuable time, for it is far easier to mend potential flaws before the code has been built and added to the greater whole. &nbsp;</p>



<p>At the start of every sprint, during planning and development, threat modeling becomes a crucial tool, a map guiding the teams to uncover and mitigate potential dangers long before they can occur. By identifying these threats early, security is no longer something that is added at the end, but something that is woven into the very fabric of the application from the outset.&nbsp;</p>



<p>To ensure security, before the code is committed to the shared repository, automated checks are employed, acting as vigilant sentries. Integrated development environment (IDE) security plug-ins provide developers with immediate feedback, warning them if their code harbors a potential risk. These automated checks catch flaws early, empowering developers to address them before they can take root.&nbsp;</p>
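<p>A minimal sketch of such a pre-commit sentry, assuming only two illustrative patterns (real scanners ship far larger rule sets):</p>

```python
import re

# Illustrative patterns only; the AWS-style key shape is real, the rest is generic.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(source: str) -> list[str]:
    """Return substrings of `source` that look like hardcoded credentials."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(source))
    return hits
```

A non-empty result would cause the hook or IDE plug-in to warn the developer before the commit is accepted.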



<p>As the code continues its journey, passing from one set of hands to the next, the software is further refined during the code review. Here, someone with the knowledge of security steps forward, offering their insight and making recommendations to bolster the work. This expertise ensures that, by the time the code is ready to move on, it is fortified against threats, functional, and valuable. &nbsp;&nbsp;</p>



<h3 class="wp-block-heading">Code commit&nbsp;</h3>



<p>A cornerstone of the DevSecOps process lies in the practice of continuous integration, a discipline that ensures code is not simply created in isolation but is constantly woven into the central repository, allowing teams to catch issues before they can fester. Developers, like diligent artisans, commit their code several times a day, ensuring that each piece of work seamlessly fits into the greater whole. This frequent integration allows potential conflicts or errors to be discovered early, long before they can threaten the stability of the project.&nbsp;</p>



<p>However, to truly safeguard the craft, it is vital to introduce security into this phase. Automated security checks must stand guard alongside the integration process. These include scanning third-party libraries and dependencies – those external pieces of code that, while useful, may harbor unseen vulnerabilities. Unit testing ensures the smallest parts of the code function as they should, while static application security testing (SAST) reviews the code for weaknesses, searching for hidden threats that might otherwise go unnoticed.&nbsp;</p>
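<p>The dependency scan can be pictured as an audit of pinned packages against an advisory feed, in the spirit of tools such as pip-audit; the package name and advisory text below are invented placeholders:</p>

```python
# Hypothetical advisory feed: (package, version) -> advisory text.
ADVISORIES = {
    ("examplelib", "1.0.0"): "known vulnerability (illustrative placeholder)",
}

def audit(pinned: dict[str, str]) -> list[str]:
    """Return an advisory message for each pinned package with a known issue."""
    findings = []
    for name, version in pinned.items():
        advisory = ADVISORIES.get((name, version))
        if advisory:
            findings.append(f"{name}=={version}: {advisory}")
    return findings
```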



<p>But safeguarding the code itself is not enough. The continuous integration (CI) and continuous delivery (CD) infrastructure, which carries this code from creation to deployment, must also be protected. Role-based access controls (RBAC) play a crucial role in this defense, limiting access to the system based on the specific roles of individuals. By ensuring that only those with the right permissions can interact with the CI/CD infrastructure, teams protect it from attackers who might seek to run malicious code or steal credentials.&nbsp;</p>
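<p>Stripped to its essence, RBAC is a mapping from roles to permitted actions; the role and permission names here are assumptions for illustration, not any product&#8217;s model:</p>

```python
# Hypothetical role-to-permission mapping for a CI/CD system.
ROLE_PERMISSIONS = {
    "developer": {"read_pipeline", "trigger_build"},
    "release_manager": {"read_pipeline", "trigger_build", "deploy_production"},
    "auditor": {"read_pipeline"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant an action only when the caller's role carries that permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Note that an unknown role receives no permissions at all – denying by default is the safer failure mode.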



<p>In this way, the continuous integration process becomes not only a means to unite code swiftly and efficiently but also a stronghold against external threats. Security is built into every layer, from the automated checks that scan the code to the protections guarding the very systems that bring the work to life.&nbsp;</p>



<h3 class="wp-block-heading">Building and testing&nbsp;</h3>



<p>In DevSecOps, where vigilance is paramount, the test environment serves as a proving ground for code before it ventures into production. Here, automated security scripts seek out potential threats that may have slipped past earlier defenses. By running these tests in a controlled environment, teams can uncover hidden vulnerabilities and ensure their work remains strong and secure.&nbsp;</p>



<p><strong>DAST</strong>&nbsp;</p>



<p>Among the many tests that can be employed during this phase is Dynamic Application Security Testing (DAST), which simulates real-world attacks against the running application. Unlike static tests, DAST operates while the application is live, identifying vulnerabilities such as cross-site scripting, SQL injection, and other dangerous flaws.&nbsp;</p>
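<p>Reduced to a pure function for illustration, a DAST-style probe for reflected cross-site scripting looks like this (<code>render_page</code> stands in for an HTTP request against the live application):</p>

```python
def reflects_payload(render_page, payload: str = "<script>alert(1)</script>") -> bool:
    """Return True when the application echoes the payload back unescaped."""
    return payload in render_page(payload)
```

A real scanner would send many such payloads over HTTP and inspect the responses; the principle is the same.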



<p><strong>Infrastructure scanning&nbsp;</strong>&nbsp;</p>



<p>Infrastructure scanning follows, casting its gaze across the entire architecture, from servers to networks, searching for weaknesses in the foundational layers that might allow an attacker entry. For those employing containers as part of their deployment strategy, container scanning ensures that these lightweight units of software do not harbor vulnerabilities in their dependencies or configurations, fortifying them before they are deployed.&nbsp;</p>



<p><strong>Cloud configuration validation</strong>&nbsp;</p>



<p>In the age of the cloud, where infrastructure is often abstracted and spread across vast digital environments, cloud configuration validation becomes crucial. By checking the configurations of cloud resources, teams can ensure that no misconfigurations – such as excessive permissions or insecure access points – expose their environments to unnecessary risk.&nbsp;</p>
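<p>A toy configuration lint conveys the idea; the field names below are invented for illustration and correspond to no particular cloud provider:</p>

```python
def validate_bucket(config: dict) -> list[str]:
    """Flag common storage-bucket misconfigurations in a resource description."""
    issues = []
    if config.get("public_access", False):
        issues.append("bucket allows public access")
    if not config.get("encryption_at_rest", False):
        issues.append("encryption at rest is disabled")
    if "*" in config.get("allowed_principals", []):
        issues.append("wildcard principal grants excessive permissions")
    return issues
```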



<p>Lastly, security acceptance testing ensures that all necessary security requirements are met. This step serves as the final safeguard, confirming that the code and infrastructure are not only functional but also fortified against known threats, with risks minimized.&nbsp;</p>



<h3 class="wp-block-heading">Production&nbsp;</h3>



<p>Once the application has been deployed to production and stands in the real world, some organizations take a proactive step to uncover any remaining weaknesses by engaging in penetration testing. This practice is more than just another test – it is a deliberate attempt to breach the application as an attacker might, with real-world tactics and determination.&nbsp;</p>



<p>In penetration testing, skilled individuals, often referred to as ethical hackers, adopt the mindset of a potential adversary. They probe the application for weaknesses, using the same strategies and tools a malicious actor might employ. These tests can range from exploiting known vulnerabilities in third-party components to more sophisticated attacks aimed at bypassing the application&#8217;s defenses.&nbsp;</p>



<p>The goal is simple: to expose any weaknesses that might have slipped through the earlier layers of security testing, those that could potentially be exploited. By simulating real-world attack scenarios, penetration testing reveals how the application holds up under direct assault, whether it&#8217;s vulnerable to unauthorized access, data breaches, or other forms of compromise.&nbsp;</p>



<p>This phase is crucial for understanding not just theoretical vulnerabilities but how the system behaves in a live environment, where the stakes are highest. Penetration testing provides organizations with invaluable insights into the robustness of their defenses, allowing them to patch any remaining weaknesses before an actual attacker can exploit them. Thus, it becomes the final line of preparation, ensuring that the application is truly ready to stand firm against threats in the production environment.&nbsp;</p>
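<p>While penetration testing itself is largely a manual craft, many teams complement it with automated dynamic scans against the running application. As one illustrative sketch (the target URL is a placeholder, and the tool choice is an example rather than a prescription), the OWASP ZAP baseline scan can be launched from a container:&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># Passive baseline scan of a deployed application with OWASP ZAP
docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t https://staging.example.com</pre>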



<h3 class="wp-block-heading">Operation&nbsp;</h3>



<p>Even with the most robust DevSecOps process, no system is entirely resistant to evolving threats. This is why continuous monitoring of applications becomes essential once they are deployed. By maintaining constant vigilance, organizations can quickly detect, respond to, and mitigate any new vulnerabilities, unforeseen threats and risks before they cause significant harm.&nbsp;</p>



<p>Monitoring tools look for signs of irregularities, scanning for vulnerabilities, unauthorized access attempts, or other suspicious activities that might signal a breach or weakness. These tools provide real-time insights, alerting teams to potential issues the moment they arise.&nbsp;&nbsp;</p>



<p>To further strengthen this defense, analytics data plays a key role. By analyzing patterns and trends in security events, teams can evaluate the effectiveness of their security posture. This data offers valuable insights into how well current defenses are performing, allowing organizations to track whether they are improving over time or if new vulnerabilities are emerging. It also highlights areas that may require optimization, guiding future efforts in reinforcing the system.&nbsp;&nbsp;</p>



<p>Bear in mind, however, that in the world of security, the battle is never truly over. </p>



<p><strong>Read also:</strong> </p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/technologies/cloud-agnostic-applications-pros-and-cons-of-cloud-agnostic-strategies/">Cloud-Agnostic Applications: Pros and Cons</a></li>



<li><a href="https://nearshore-it.eu/best-practices/horizontal-vs-vertical-scaling/">Horizontal vs Vertical Scaling: A 101 Guide</a></li>



<li><a href="https://nearshore-it.eu/technologies/cloud-computing-trends-for-2023-2025/">How to gain a cloud advantage? Here are 7 cloud computing trends for 2023 – 2025 </a></li>
</ul>



<h2 class="wp-block-heading" id="Implementing-DevSecOps-and-automating-security-best-practices-">Implementing DevSecOps and automating security best practices&nbsp;</h2>



<p>I bid you to consider these tools as you embark upon the journey of DevSecOps automation within your organization. Some are like fruit hanging low upon the bough, easily gathered and swiftly put to use, while others may lie deeper within the forest, requiring more effort to attain. Yet, though the path to them may be more difficult, the rewards they yield are well worth the quest.&nbsp;</p>


</style><div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/07/Marek-Dobkowski-1-bezloczkowy-kwadrat.jpg" alt="Marek Dobkowski 1 bezloczkowy kwadrat" title="Top DevSecOps Tools: Ensuring Sensitive Data Security &amp; Compliance                5"></div><div class="tile-content"><p class="entry-title client-name promotion-box__headline2">Want to gain cost and competitive advantage in the cloud?</p>
<p class="promotion-box__description2">Consult with <strong>Marek Dobkowski</strong>, Head of Microsoft Practice, for expert guidance.</p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek1@gfi.fr/" target="_blank" rel="noopener">Let's talk!</a>





</div></div></div></div>



<h2 class="wp-block-heading" id="Recommended-DevSecOps-tools">Recommended DevSecOps tools&nbsp;</h2>



<h3 class="wp-block-heading">Trivy&nbsp;</h3>



<p>Trivy has risen to prominence as a trusted solution among open-source security scanners, valued for its reliability, swiftness, and simplicity. It offers a far-reaching array of security checks, making it a vital companion for those seeking to fortify their DevSecOps practices. For teams looking to secure their realms of code and infrastructure, Trivy stands as a steadfast tool, ever vigilant and ready to ensure the safety of their creations.&nbsp;</p>



<p><strong>Targets (what Trivy can scan):&nbsp;</strong></p>



<ul class="wp-block-list">
<li>Container Image&nbsp;</li>



<li>Filesystem&nbsp;</li>



<li>Git Repository (remote)&nbsp;</li>



<li>Virtual Machine Image&nbsp;</li>



<li>Kubernetes&nbsp;</li>



<li>AWS&nbsp;</li>
</ul>



<p><strong>Scanners (what Trivy can find there):&nbsp;</strong></p>



<ul class="wp-block-list">
<li>OS packages and software dependencies in use (SBOM)&nbsp;</li>



<li>Known vulnerabilities (CVEs)&nbsp;</li>



<li>IaC issues and misconfigurations&nbsp;</li>



<li>Sensitive information and secrets&nbsp;</li>



<li>Software licenses&nbsp;</li>
</ul>
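<p>To see Trivy at work, a few representative invocations are sketched below (assuming the Trivy CLI is installed; image names and paths are examples, and flags may differ slightly between versions):&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># Scan a container image for known vulnerabilities (CVEs)
trivy image nginx:latest

# Scan the local filesystem for vulnerabilities, secrets and misconfigurations
trivy fs --scanners vuln,secret,misconfig .

# Check IaC files (e.g. Terraform, Kubernetes manifests) for misconfigurations
trivy config ./infrastructure</pre>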



<h3 class="wp-block-heading">TruffleHog&nbsp;</h3>



<p>The TruffleHog tool is a masterful seeker of concealed passwords and keys. Like a skilled ranger, TruffleHog ventures where few dare to tread, unearthing the hidden secrets that, if left unchecked, could spell doom for the unwary.&nbsp;</p>



<p><strong>How TruffleHog wields its power:</strong>&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Detect: </strong>TruffleHog scours the history of all platforms, much like a wise lorekeeper sifting through ancient scrolls, seeking out long-forgotten secrets. Yet it looks not only in the obvious places but also in the whispers of comments, the hidden folds of Docker images, and other obscure corners.&nbsp;</li>



<li><strong>Analyze:</strong> TruffleHog reveals the true nature of the secrets it uncovers, discerning what resources and permissions are tied to API keys and other tokens. Remarkably, it achieves this without ever needing to peer into the provider&#8217;s vault.&nbsp;</li>



<li><strong>Prevent:</strong> To stop the ill-fated inclusion of secrets from the very beginning, TruffleHog sets traps at key points, using pre-commit and pre-receive hooks. These safeguards ensure that no sensitive data is unintentionally leaked before it ever leaves the developer&#8217;s hand.&nbsp;</li>



<li><strong>Remediate:</strong> TruffleHog continues to track the fate of discovered keys and secrets. It verifies that remediation is complete, sending reminders on preferred platforms and providing knowledge to users on how to properly manage and secure the keys that were once at risk.&nbsp;</li>
</ul>
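<p>To see the seeker at work, two typical hunts are sketched below (assuming TruffleHog v3 is installed; the repository URL and path are placeholders, and flags may vary between versions):&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># Scan a Git repository's full history, reporting only verified secrets
trufflehog git https://github.com/example-org/example-repo --only-verified

# Scan a local directory, including files that were never committed
trufflehog filesystem ./my-project</pre>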



<p><strong>Why TruffleHog is a worthy ally:</strong>&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Comprehensive multi-branch analysis: </strong>TruffleHog does not simply guard the main road but patrols every path. It scans all branches, not just the primary one, ensuring the same level of vigilance across the entire project. This is especially valuable in larger domains where many branches are being worked on in tandem.&nbsp;</li>



<li><strong>Credential verification:</strong> TruffleHog employs programmatic verification, testing each credential using its own protocol or API. This removes the false trails, ensuring that only real threats are brought to light.&nbsp;</li>



<li><strong>Open-source fellowship: </strong>As with any great alliance, TruffleHog thrives through the support of an open-source community. Many dedicated hands join together to audit and improve the tool, ensuring that no single voice carries undue weight. The community checks and balances each other&#8217;s work, so that trust is shared among all.&nbsp;<br>&nbsp;</li>
</ul>



<h3 class="wp-block-heading">Snyk&nbsp;</h3>



<p>This platform guards the entirety of an application&#8217;s journey – from the very first lines of code to its deployment in the cloud. Through its guidance, developers may discover and mend vulnerabilities before they are ever loosed upon the world.&nbsp;</p>



<p><strong>The powers of Snyk:</strong>&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Snyk open source: </strong>Snyk scours open-source libraries and dependencies, seeking out vulnerabilities. And when such flaws are found, it does not merely warn the developer but offers a swift means to mend them, restoring the strength of the code.&nbsp;</li>



<li><strong>Snyk code:</strong> As code is written, Snyk watches in real-time, finding and fixing vulnerabilities within the very heart of the application. It is like a companion at the developer&#8217;s side, ever watchful and ready to lend its aid.&nbsp;&nbsp;</li>



<li><strong>Snyk container:</strong> In the context of containers and Kubernetes, where applications are housed, Snyk&#8217;s gaze does not falter. It delves into container images, finding and repairing potentially harmful vulnerabilities.&nbsp;&nbsp;</li>



<li><strong>Snyk Infrastructure as Code: </strong>With great foresight, Snyk peers into the blueprints of infrastructure itself, examining the configurations of Terraform and Kubernetes code. Should it find any insecurity in the very foundation, it offers swift guidance on how to rectify these flaws, ensuring that the structure remains strong and secure.&nbsp;</li>
</ul>
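<p>In practice, each of these powers maps onto a CLI command. A hedged sketch, assuming the Snyk CLI is installed and authenticated (image and path names are examples):&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># Test open-source dependencies of the current project
snyk test

# Statically analyze your own source code
snyk code test

# Scan a container image for vulnerabilities
snyk container test nginx:latest

# Check IaC files for insecure configurations
snyk iac test ./infrastructure</pre>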



<h3 class="wp-block-heading">Pre-commit&nbsp;</h3>



<p>Pre-commit is a powerful system for managing and maintaining pre-commit hooks across many programming languages. It ensures that no errant detail is left unchecked before the code is sent for review.&nbsp;</p>



<p>In the world of Git, hook scripts act as a safeguard, catching simple errors before they reach the eyes of a reviewer. Whenever a developer commits their work, these hooks spring into action, pointing out issues such as missing semicolons, trailing whitespace, or forgotten debug statements. By addressing these small matters early, Pre-commit allows the reviewer to focus on the grand architecture of the changes, rather than wasting time on trivial style errors.&nbsp;</p>
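<p>A minimal setup is declared in a .pre-commit-config.yaml file at the repository root; the sketch below uses the community pre-commit-hooks repository (the pinned revision is an example and should be updated to a current release):&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: detect-private-key</pre>



<p>Running <strong>pre-commit install</strong> once registers the hooks with Git, after which they run automatically on every commit.&nbsp;</p>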



<h3 class="wp-block-heading">Wazuh&nbsp;</h3>



<p>Free and open to all, Wazuh is skilled in the arts of threat prevention, detection, and response. It is a protector capable of defending the realms of on-premises fortresses, virtualized strongholds, containerized ships, and vast cloud kingdoms alike.&nbsp;</p>



<p>The strength of Wazuh lies in two parts: its endpoint security agents, which are deployed like watchful sentinels to the systems they protect, and its management server, a wise and ever-alert overseer.&nbsp;</p>



<p>The agents gather knowledge and data from the systems they monitor, and the management server collects, analyzes, and interprets this information, ever vigilant for signs of danger.&nbsp;</p>
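<p>Deploying such a sentinel is straightforward. A sketch for Debian/Ubuntu systems, assuming the Wazuh package repository has already been configured (the manager address is a placeholder):&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># Point the agent at its manager at install time
WAZUH_MANAGER="wazuh-manager.example.com" apt-get install wazuh-agent

# Enable and start the agent service
systemctl daemon-reload
systemctl enable --now wazuh-agent</pre>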



<p>Wazuh, when integrated with the Elastic Stack, offers seamless navigation through security alerts, enhancing visibility and threat detection. Its combined SIEM and XDR capabilities protect IT assets and enable a swift response to potential security threats. &nbsp;<br>&nbsp;<br><strong>Use-cases:</strong>&nbsp;</p>



<ul class="wp-block-list">
<li>Configuration assessment&nbsp;</li>



<li>Malware detection&nbsp;</li>



<li>File integrity monitoring&nbsp;</li>



<li>Threat hunting&nbsp;</li>



<li>Log data analysis&nbsp;</li>



<li>Vulnerability detection&nbsp;</li>



<li>Incident response&nbsp;</li>



<li>Regulatory compliance&nbsp;</li>



<li>IT hygiene&nbsp;</li>



<li>Containers security&nbsp;</li>



<li>Posture management&nbsp;</li>



<li>Workload protection&nbsp;<br>&nbsp;</li>
</ul>



<h2 class="wp-block-heading" id="Successful-DevSecOps-and-DevOps-Integration-">Successful DevSecOps and DevOps Integration&nbsp;</h2>



<p>The integration should be as natural as the turning of the seasons, an organic and seamless process that unfolds with time. It is not a single task to be completed and forgotten, but a continuous journey. Though at times it may call for a shift in the very culture of the organization, such change is not forced, but arises naturally.&nbsp;</p>



<p>And as in all great works of creation, what we forge must be shaped by the needs of people. The processes we follow and the tools we wield must be chosen wisely, fitting the unique contours of our organization. Only then can the integration thrive, as both DevOps and DevSecOps become not separate disciplines, but part of the same living tapestry, woven together in purpose and vision.&nbsp;</p>


</style><div class="promotion-box promotion-box--image-left promotion-box--full-width-without-image"><div class="tiles latest-news-once"><div class="tile"><div class="tile-content"><p class="promotion-box__description2"><strong>Consult your project directly with a specialist</strong></p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek1@gfi.fr/" target="_blank" rel="noopener">Book a meeting</a></div></div></div></div>



<p>Read also:</p>



<ul class="wp-block-list">
<li><a href="https://nearshore-it.eu/articles/devops-monitoring-systems/">DevOps monitoring systems</a></li>



<li><a href="https://nearshore-it.eu/technologies/azure-durable-function-in-serverless-programming/">Azure Durable Functions – the extension to Azure Functions</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/devops-devsecops-safeguard-sensitive-data-with-right-tools/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>BizDevOps 101 – Can you do it yourself?</title>
		<link>https://nearshore-it.eu/webinars/bizdevops-101-can-you-do-it-yourself/</link>
					<comments>https://nearshore-it.eu/webinars/bizdevops-101-can-you-do-it-yourself/#respond</comments>
		
		<dc:creator><![CDATA[Piotr]]></dc:creator>
		<pubDate>Fri, 13 Sep 2024 11:12:00 +0000</pubDate>
				<category><![CDATA[Webinars]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=34332</guid>

					<description><![CDATA[Integrate BizDevOps into your organization seamlessly and ensure effective collaboration between business and IT.]]></description>
										<content:encoded><![CDATA[
<p>Are you a business leader struggling to align your company&#8217;s strategic goals with your IT operations? Or perhaps an IT manager facing challenges in communicating and implementing business requirements effectively? This webinar is designed for professionals like you, who are seeking to bridge the gap between business and IT and achieve a harmonious, productive work environment. </p>



<h2 class="wp-block-heading has-text-align-center">To access the video, please fill in the form</h2>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<h3 class="wp-block-heading">Webinar highlights</h3>



<ol class="wp-block-list">
<li><strong>&#8220;Frameworks &#8211; the history, evolution, and the rise&#8221;</strong><br>Evolving from the traditional Waterfall approach through Agile to understanding today&#8217;s challenges.</li>



<li><strong>Evolution from DevOps to BizDevOps</strong><br>The significance of merging business and IT decision-making, and an explanation of why BizDevOps is a natural progression in the industry. </li>



<li><strong>A high-level overview of BizDevOps aspects </strong><br>Overview, theory, and examples (with solutions for our imaginary corporation) for each aspect.</li>



<li><strong>Detailed focus and summary of key aspects </strong>
<ul class="wp-block-list">
<li><strong>Align: </strong>How to ensure business and IT goals are synchronized?</li>



<li><strong>Define: </strong>Establishing clear objectives and requirements. </li>



<li><strong>Approve:</strong> Streamlining decision-making processes.</li>
</ul>
</li>



<li><strong>Real communication challenges and solutions</strong><br>Case study
<ul class="wp-block-list">
<li>Addressing communication barriers within organizations.</li>



<li>Strategies to overcome decision freezes and management resistance.</li>
</ul>
</li>
</ol>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">[contact-form-7]
</div>
</div>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Meet our speaker</h3>



<div class="wp-block-media-text is-stacked-on-mobile" style="grid-template-columns:15% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="512" height="512" src="https://nearshore-it.eu/wp-content/uploads/2024/12/Amadeusz_Kryze_circle512.png" alt="Amadeusz Kryze" class="wp-image-34333 size-full" title="BizDevOps 101 – Can you do it yourself? 6" srcset="https://nearshore-it.eu/wp-content/uploads/2024/12/Amadeusz_Kryze_circle512.png 512w, https://nearshore-it.eu/wp-content/uploads/2024/12/Amadeusz_Kryze_circle512-300x300.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/12/Amadeusz_Kryze_circle512-150x150.png 150w, https://nearshore-it.eu/wp-content/uploads/2024/12/Amadeusz_Kryze_circle512-495x495.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/12/Amadeusz_Kryze_circle512-395x395.png 395w" sizes="auto, (max-width: 512px) 100vw, 512px" /></figure><div class="wp-block-media-text__content">
<p><strong>Amadeusz Kryze</strong><br>Principal DevOps Leader at Inetum</p>



<p style="font-size:16px">The Dev[Sec]Ops Technology Division Manager, with over ten years of experience in the DevOps and DevSecOps industry, focused on gathering and implementing best practices tailored to individual project needs.  </p>
</div></div>



<div style="height:25px" aria-hidden="true" class="wp-block-spacer"></div>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/webinars/bizdevops-101-can-you-do-it-yourself/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Cut Costs, Boost Efficiency: Your Seamless Migration from VMware to Azure</title>
		<link>https://nearshore-it.eu/webinars/cut-costs-boost-efficiency-your-seamless-migration-from-vmware-to-azure/</link>
					<comments>https://nearshore-it.eu/webinars/cut-costs-boost-efficiency-your-seamless-migration-from-vmware-to-azure/#respond</comments>
		
		<dc:creator><![CDATA[NOPR]]></dc:creator>
		<pubDate>Thu, 12 Sep 2024 12:46:03 +0000</pubDate>
				<category><![CDATA[Webinars]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=32258</guid>

					<description><![CDATA[This session is designed to help you understand why recent VMware price increases are a great reason to migrate to the Azure cloud and how to effectively manage these changes.]]></description>
										<content:encoded><![CDATA[
<p>Don&#8217;t miss this opportunity to learn how to optimize your cloud costs and make a seamless transition from VMware to Azure.</p>



<h2 class="wp-block-heading has-text-align-center">To access the video, please fill in the form</h2>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<h2 class="wp-block-heading">Agenda</h2>



<p>During the webinar, we have discussed the following aspects:</p>



<ul class="wp-block-list">
<li><strong>Overview of VMware pricing</strong> &#8211; a brief overview of VMware&#8217;s old pricing model and recent changes</li>



<li><strong>Impact of price increases</strong> on businesses and IT departments</li>



<li><strong>Discussing Azure solution</strong>, which can offer more cost-effective and efficient alternatives</li>



<li><strong>Benefits and challenges</strong> of migrating to cloud-based virtualization solutions as a response to VMware price increases</li>



<li><strong>Strategies for managing price increases </strong>&#8211; strategies for managing the impact of the price increases.</li>
</ul>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">[contact-form-7]
</div>
</div>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading">Speaker</h2>



<div class="wp-block-media-text is-stacked-on-mobile" style="grid-template-columns:15% auto"><figure class="wp-block-media-text__media"><img loading="lazy" decoding="async" width="200" height="200" src="https://nearshore-it.eu/wp-content/uploads/2024/09/dawid-janasik.png" alt="Software / Systems Engineer &amp; DevOps Team Leader" class="wp-image-32259 size-full" title="Cut Costs, Boost Efficiency: Your Seamless Migration from VMware to Azure 7" srcset="https://nearshore-it.eu/wp-content/uploads/2024/09/dawid-janasik.png 200w, https://nearshore-it.eu/wp-content/uploads/2024/09/dawid-janasik-150x150.png 150w" sizes="auto, (max-width: 200px) 100vw, 200px" /></figure><div class="wp-block-media-text__content">
<p><strong>Dawid JANASIK</strong></p>



<p>Software / Systems Engineer &amp; DevOps Team Leader in Inetum with over 10 years spent working on many challenging projects for different clients &amp; many industries. He is a certified engineer working on extending his know-how regarding DevOps &amp; cloud-native solutions.</p>
</div></div>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/webinars/cut-costs-boost-efficiency-your-seamless-migration-from-vmware-to-azure/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG</title>
		<link>https://nearshore-it.eu/articles/create-ai-chat-with-semantic-kernel/</link>
					<comments>https://nearshore-it.eu/articles/create-ai-chat-with-semantic-kernel/#respond</comments>
		
		<dc:creator><![CDATA[Marek Dobkowski]]></dc:creator>
		<pubDate>Fri, 30 Aug 2024 10:25:38 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=28857</guid>

					<description><![CDATA[Discover how to create AI chat apps with Semantic Kernel by Microsoft. Learn to build agents using .NET and integrate large language models effortlessly]]></description>
										<content:encoded><![CDATA[
<p>Over the last year, Generative AI has become a popular tool for creating various forms of content, including text, images, and audio. Many developers are now exploring how to incorporate these systems into their applications to benefit their users.</p>



<p>Despite the rapid advancement of technology and the constant release of new models and SDKs, it can be difficult for developers to know where to begin. While there are many polished end-to-end sample applications available for .NET developers to use as a reference, some may prefer to build their applications incrementally, starting with the basics and gradually adding more advanced features.</p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#Building-a-console-based-.NET-chat-app-with-solutions-like-Semantic-Kernel">1.  Building a console-based .NET chat app with solutions like Semantic Kernel</a></li>
                    <li><a href="#How-to-get-started-with-Semantic-Kernel-SDK.-Learn-how-to-use-it">2.  How to get started with Semantic Kernel SDK. Learn how to use it</a></li>
                    <li><a href="#Leveraging-Semantic-functions:-reusable-prompts,-dynamic-input-handling,-and-plugins">3.  Leveraging Semantic functions: reusable prompts, dynamic input handling, and plugins</a></li>
                    <li><a href="#Does-LLM-have-memory?-How-to-use-Semantic-Kernel-to-overcome-statelessness-in-chat-agents">4.  Does LLM have memory? How to use Semantic Kernel to overcome statelessness in chat agents</a></li>
                    <li><a href="#Does-LLM-know-everything?-Connectors-and-RAG-with-Semantic-plugins-and-native-functions">5.  Does LLM know everything? Connectors and RAG with Semantic plugins and native functions</a></li>
                    <li><a href="#Storing-memories">6.  Storing memories</a></li>
                    <li><a href="#Enhancing-your-Semantic-integration:-use-the-Semantic-best-practices-and-considerations">7.  Summary</a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="Building-a-console-based-.NET-chat-app-with-solutions-like-Semantic-Kernel">Building a console-based .NET chat app with solutions like Semantic Kernel</h2>



<p>This post aims to guide developers in building a simple console-based .NET chat application from scratch, with minimal dependencies and fuss. The ultimate goal is to create an application that can answer questions based on both the data used to train the model and additional data provided dynamically. Each code sample provided in this post is a complete application, allowing developers to easily copy, paste, and run the code, experiment with it, and then incorporate it into their own applications for further refinement and customization.</p>



<h2 class="wp-block-heading" id="How-to-get-started-with-Semantic-Kernel-SDK.-Learn-how-to-use-it">How to get started with Semantic Kernel SDK. Learn how to use it</h2>



<p>To begin, make sure you have .NET 8 installed, and create a simple console app:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dotnet new console -o chat-sample-app-01 --use-program-main

cd chat-sample-app-01</pre>



<p>This creates a new directory chat-sample-app-01 and populates it with two files: chat-sample-app-01.csproj and Program.cs. We then need to bring in one NuGet package: Microsoft.SemanticKernel.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dotnet add package Microsoft.SemanticKernel</pre>



<p><a href="https://learn.microsoft.com/en-us/semantic-kernel/overview/" target="_blank" rel="noopener">Find out more at Microsoft Learn</a></p>



<h3 class="wp-block-heading">Planning, orchestration, and multiple plugins</h3>



<p>Instead of referencing specific AI-related packages such as Azure.AI.OpenAI, I have opted for an open source <a href="https://learn.microsoft.com/en-us/semantic-kernel/overview/" target="_blank" rel="noopener">Semantic Kernel</a> kit to streamline various interactions and easily switch between different implementations for faster experimentation. Semantic Kernel offers a collection of libraries that simplify working with Large Language Models (LLMs) by providing abstractions for various AI concepts, allowing for the easy substitution of different implementations. It also includes many concrete implementations of these abstractions, wrapping numerous other SDKs, and offers support for planning, orchestration, and multiple plugins. This post will explore various aspects of Semantic Kernel, but my primary focus is on its abstractions.</p>



<p>While I have tried to keep dependencies to a minimum for the purpose of this article, there is one more I cannot avoid: you need access to an LLM. The easiest way to get access is via either OpenAI or Azure OpenAI. For this post, I am using Azure OpenAI. You will need three pieces of information for the remainder of the post:</p>



<ul class="wp-block-list">
<li>Your API key and endpoint provided to you in the Azure portal</li>



<li>A chat model, or to be more precise, the deployment name of your model. I use GPT-4-32k (0613), which as of this writing has a context window of 32K tokens. I’ll explain more about what it is later.</li>



<li>An embedding model. I use text-embedding-3-large.</li>
</ul>



<h3 class="wp-block-heading">Let&#8217;s make it as easy as possible</h3>



<p>With that out of the way, we can dive in. Believe it or not, we can create a simple chat app in just a few lines of code. Copy and paste this into your Program.cs:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;

namespace chatSampleApp01
{
    class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
            
            var kernel = builder.Build();

            //Question and Answer loop
            string question;
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;
                Console.Write("Mine Copilot: ");
                Console.WriteLine(await kernel.InvokePromptAsync(question));
                Console.WriteLine();
            }
        }
    }
}</pre>



<p>To prevent accidentally revealing my API key, which should be safeguarded like a password, I have stored it in an environment variable and accessed it using GetEnvironmentVariable. Then I created a new kernel using the Semantic Kernel APIs and added an OpenAI chat completion service to it. The Microsoft.SemanticKernel package we imported earlier includes references to client support for both OpenAI and Azure OpenAI, eliminating the need for additional components to communicate with these services. With this configuration, we can now run our chat app using dotnet run, enter questions, and receive responses from the service.&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="628" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_01.png" alt="semantic kernel" class="wp-image-28866" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 8" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_01.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_01-300x169.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_01-768x433.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_01-495x279.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>The expression await kernel.InvokePromptAsync(question) is the core of the interaction with the LLM, where it captures the user&#8217;s input and sends it to the LLM, receiving a string response in return. Semantic Kernel is equipped to handle various function types, including prompt functions for text-based AI interactions and standard .NET methods capable of executing any C# code. These functions can be triggered directly by the user, as shown in this example, or as part of a &#8220;plan&#8221; where a set of functions is provided to the LLM to formulate a strategy to achieve a specified objective. Semantic Kernel can execute these functions as per the plan (I will show it later). Additionally, some models support a &#8220;function calling&#8221; feature, which is also simplified by Semantic Kernel.</p>



<h2 class="wp-block-heading" id="Leveraging-Semantic-functions:-reusable-prompts,-dynamic-input-handling,-and-plugins">Leveraging Semantic functions: reusable prompts, dynamic input handling, and plugins</h2>



<p>In this instance, the &#8220;function&#8221; is simply the user&#8217;s input, such as the question &#8220;What is Inetum Polska?&#8221;, which is then processed by the LLM through the InvokePromptAsync method. To make the concept of a &#8220;function&#8221; clearer, we can extract it into a separate entity using the `CreateFunctionFromPrompt` method, allowing us to reuse the same function for multiple inputs. This approach eliminates the need to create a new function for each input, but requires a way to incorporate the user&#8217;s input into the existing function. Semantic Kernel supports this through prompt templates, which include placeholders that are filled in with the appropriate variables and functions. For example, if the sample is run again with a request for the current time, the LLM will not be able to provide an answer:</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="299" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_03.png" alt="semantic kernel" class="wp-image-28870" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 9" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_03.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_03-300x80.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_03-768x206.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_03-495x133.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>To anticipate such inquiries, we can equip the LLM with the necessary information within the prompt itself. I have registered a function with the kernel that provides the current date and time. Subsequently, I created a prompt function that utilizes a prompt template to invoke this time function during the prompt&#8217;s rendering. This template also incorporates the value of the $input variable. It is possible to pass any number of arguments with arbitrary names using a KernelArguments dictionary; in this case, I have chosen to name one &#8220;input&#8221;. Functions are organized into collections known as &#8220;plugins&#8221;.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;

namespace chatSampleApp02
{
    class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);

            // Create the prompt function as part of a plugin and add it to the kernel.
            builder.Plugins.AddFromFunctions(
                pluginName: "DateTimeHelpers",
                functions: [
                    KernelFunctionFactory.CreateFromMethod(
                        ()=> $"{DateTime.UtcNow:r}", "Now", "Gets the current date and time"
                    )
                ]);

            var kernel = builder.Build();

            var kernelFunction = KernelFunctionFactory.CreateFromPrompt(
                promptTemplate: @"
                    The current date and time is {{ datetimehelpers.now }}.
                    {{ $input }}"
                );

            //Question and Answer loop
            string question;
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;
                Console.Write("Mine Copilot: ");
                Console.WriteLine(await kernelFunction.InvokeAsync(kernel, new() { ["input"] = question }));
                Console.WriteLine();
            }
        }
    }
}
</pre>



<p>When the function is activated, it renders the prompt by calling the previously registered &#8216;Now&#8217; function and integrating its output into the prompt. Now, posing the same question yields a more comprehensive answer.&nbsp;</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="367" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_04.png" alt="semantic kernel" class="wp-image-28873" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 10" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_04.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_04-300x99.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_04-768x253.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_04-495x163.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="Does-LLM-have-memory?-How-to-use-Semantic-Kernel-to-overcome-statelessness-in-chat-agents">Does LLM have memory? How to use Semantic Kernel to overcome statelessness in chat agents&nbsp;</h2>



<p>We have made significant strides: with just a few lines of code, we have crafted a basic chat agent that can field repeated questions and provide responses. Moreover, we have managed to furnish it with extra prompt information to aid in answering questions it would otherwise be unable to tackle. Yet, in doing so, we have also fashioned a chat agent devoid of memory, lacking any awareness of prior conversations:&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="894" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_02.png" alt="semantic kernel" class="wp-image-28876" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 11" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_02.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_02-300x241.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_02-768x616.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_02-493x395.png 493w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>To remedy the statelessness of LLMs and their lack of memory, we must maintain a record of our chat history and incorporate it into each prompt request. This can be done manually, by weaving the chat history into the prompt ourselves, or we can rely on Semantic Kernel to handle it for us, which in turn delegates to the clients for Azure OpenAI, OpenAI, or any other chat service. The latter approach uses the registered IChatCompletionService to create a new chat, which is essentially a running collection of all messages. This method not only processes requests and outputs responses but also archives them in the chat history.&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

namespace chatSampleApp03
{
    class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
            var kernel = builder.Build();

            //Create new chat
            var chatService = kernel.GetRequiredService&lt;IChatCompletionService>();
            var chat = new ChatHistory(
                    systemMessage: "You are an AI assistant that helps people find information."
                );

            //Question and Answer loop
            string question;
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;
                chat.AddUserMessage(question);

                Console.Write("Mine Copilot: ");
                var answer = await chatService.GetChatMessageContentAsync(chat);
                chat.AddAssistantMessage(answer.Content!);
                Console.WriteLine(answer);
                
                Console.WriteLine();
            }
        }
    }
}</pre>



<p>With that chat history rendered into an appropriate prompt, we then get back much more satisfying results:&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="789" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_05.png" alt="semantic kernel" class="wp-image-28879" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 12" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_05.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_05-300x212.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_05-768x543.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_05-495x350.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>In a practical application, it is crucial to consider various additional factors, such as the data processing limit of language models, referred to as the &#8220;context window.&#8221; The GPT-4-32k (0613) model I am using here can handle roughly 32,000 tokens, where a token can be a full word, part of a word, or a single character. Additionally, each token incurs a cost on every interaction. Therefore, when transitioning from a trial phase to full production, it becomes essential to monitor the chat history&#8217;s size closely and manage it, for example by removing parts that are no longer needed.</p>
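<p>One simple way to keep the history within budget is to drop the oldest exchanges once a rough token estimate exceeds a threshold. The sketch below is only illustrative: it assumes the common &#8220;about four characters per token&#8221; rule of thumb, and the helper name and limit are my own &#8211; a production app should count tokens with a real tokenizer.</p>

<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">// Illustrative helper: trim the oldest messages when a rough token estimate
// exceeds the budget (~4 characters per token heuristic). Requires System.Linq.
static void TrimHistory(ChatHistory chat, int maxEstimatedTokens = 24000)
{
    int EstimateTokens() => chat.Sum(m => (m.Content?.Length ?? 0) / 4);

    // Index 0 holds the system message, so we always keep it.
    while (chat.Count > 1)
    {
        if (EstimateTokens() &lt;= maxEstimatedTokens) break;
        chat.RemoveAt(1); // drop the oldest non-system message
    }
}</pre>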



<p>We can enhance the user experience by adding a small segment of code that accelerates the interaction. These large language models (LLMs) generate responses by predicting the next token, so while we have been displaying the complete response once it is fully generated, we can actually present it in real time as it is being formulated. This functionality is available in Semantic Kernel through IAsyncEnumerable, which allows for convenient integration using await foreach loops to stream the response incrementally.</p>
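<p>A minimal sketch of that streaming loop, assuming the chatService and chat objects from the previous example:</p>

<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">// Stream the response as it is generated instead of waiting for the full answer.
var responseBuilder = new StringBuilder();

Console.Write("Mine Copilot: ");
await foreach (var chunk in chatService.GetStreamingChatMessageContentsAsync(chat))
{
    Console.Write(chunk);                  // print each fragment as it arrives
    responseBuilder.Append(chunk.Content); // accumulate the full text
}
Console.WriteLine();

// Store the complete answer in the history, as before.
chat.AddAssistantMessage(responseBuilder.ToString());</pre>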



<figure class="wp-block-video"><video controls src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_06.mp4"></video></figure>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="Does-LLM-know-everything?-Connectors-and-RAG-with-Semantic-plugins-and-native-functions">Does LLM know everything? Connectors and RAG with Semantic plugins and native functions</h2>



<p>We have now reached a point where we can pose questions and receive answers, maintain a record of these exchanges to refine future responses, and even broadcast our findings. But is our work complete? Not quite.</p>



<p>As it stands, the only information available to the LLM for providing answers is the data it was initially trained on, plus any additional information we explicitly include in the prompt (like the current time, as previously mentioned). Consequently, if we inquire about topics outside the LLM&#8217;s training or areas where its knowledge is lacking, the responses we receive may be unhelpful, misleading, or entirely incorrect &#8211; often referred to as &#8216;hallucinations&#8217;.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="334" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_07.png" alt="semantic kernel" class="wp-image-28882" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 13" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_07.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_07-300x90.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_07-768x230.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_07-495x148.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>The question concerns the latest C# 12 changes, which were released after this GPT-4-32k (0613) model was trained (November 2023 vs. October 2021). The model has no information about these newest capabilities, so it cannot give a reasonable answer to the first question. We need a way to teach it about the things the user is asking about.</p>



<p>We already know the way to teach an LLM: include the necessary information in the prompt. For instance, these Microsoft Learn articles:</p>



<ul class="wp-block-list">
<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/relationships-between-language-and-library.md" target="_blank" rel="noreferrer noopener">Relationships between language features and library types</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/version-update-considerations.md" target="_blank" rel="noreferrer noopener">Version and update considerations for C# developers</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/csharp-version-history.md" target="_blank" rel="noreferrer noopener">The history of C#</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/csharp-11.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in C# 11</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/roslyn/blob/main/docs/compilers/CSharp/Compiler+Breaking+Changes+-+DotNet+7.md" target="_blank" rel="noreferrer noopener">Breaking changes in Roslyn after .NET 6 all the way to .NET 7</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/csharp-12.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in C# 12</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/roslyn/blob/main/docs/compilers/CSharp/Compiler+Breaking+Changes+-+DotNet+8.md" target="_blank" rel="noreferrer noopener">Breaking changes in Roslyn after .NET 7 all the way to .NET 8</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/csharp/whats-new/csharp-13.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in C# 13</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/roslyn/blob/main/docs/compilers/CSharp/Compiler+Breaking+Changes+-+DotNet+9.md" target="_blank" rel="noreferrer noopener">Breaking changes in Roslyn after .NET 8 all the way to .NET 9</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/core/whats-new/dotnet-8/overview.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in .NET 8</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/core/whats-new/dotnet-8/runtime.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in the .NET 8 runtime</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/core/whats-new/dotnet-8/sdk.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in the SDK and tooling for .NET 8</a>&nbsp;</li>



<li><a href="https://github.com/dotnet/docs/blob/main/docs/core/whats-new/dotnet-8/containers.md" target="_blank" rel="noreferrer noopener">What&#8217;s new in containers for .NET 8</a>&nbsp;</li>
</ul>



<p>These articles, all published after this GPT-4-32k (0613) model was trained, include a detailed section on the new capabilities of C# 12. By incorporating their content into the prompt, we can supply the LLM with the required knowledge. In the following example, I have expanded the previous code to download the web page contents and then insert them into a user message.</p>



<p>This approach ensures that the LLM is provided with the latest information to assist with the query.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using System.Text;

namespace chatSampleApp05
{
    internal class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
            var kernel = builder.Build();

            //Create new chat
            var chatService = kernel.GetRequiredService&lt;IChatCompletionService>();
            var chat = new ChatHistory(
                    systemMessage: "You are an AI assistant that helps people find information."
                );

            // Download the documents and add their contents to our chat
            var articleList = new List&lt;string>
            {
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/relationships-between-language-and-library.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/version-update-considerations.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-version-history.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-11.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%207.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-12.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%208.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-13.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%209.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/overview.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/runtime.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/sdk.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/containers.md"

            };
            
            var articleStringBuilder = new StringBuilder();

            using (var httpClient = new HttpClient())
            {
                foreach (var article in articleList) {
                    articleStringBuilder.Append(await httpClient.GetStringAsync(article));
                }

                chat.AddUserMessage($"Here's some additional information: {articleStringBuilder.ToString()}");
            }

            string question;
            StringBuilder stringBuilder = new StringBuilder();

            //Question and Answer loop
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;
                chat.AddUserMessage(question);

                stringBuilder.Clear();
                Console.Write("Mine Copilot: ");

                await foreach (var message in chatService.GetStreamingChatMessageContentsAsync(chat))
                {
                    Console.Write(message);
                    stringBuilder.Append(message.Content);
                }
                Console.WriteLine();
                chat.AddAssistantMessage(stringBuilder.ToString());
                Console.WriteLine();
            }
        }
    }
}</pre>



<p>And the result?</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="362" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_08.png" alt="semantic kernel" class="wp-image-28885" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 14" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_08.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_08-300x97.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_08-768x249.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_08-495x161.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>We have exceeded the context window almost twofold without adding any history to the conversation. We obviously need to include less information, while still ensuring it is relevant. RAG will help us&#8230;&nbsp;</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/07/Marek-Dobkowski-1-bezloczkowy-kwadrat.jpg" alt="Marek Dobkowski 1 bezloczkowy kwadrat" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 15"></div><div class="tile-content"><p class="entry-title client-name promotion-box__headline2">Interested in building AI-powered apps?</p>
<p class="promotion-box__description2">Connect with <strong>Marek Dobkowski</strong> to explore Microsoft solutions that simplify your workflow and enhance productivity.</p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek1@gfi.fr/" target="_blank" rel="noopener">Schedule a meeting</a>






</div></div></div></div></td></tr></tbody></table></figure>



<h3 class="wp-block-heading">Using RAG</h3>



<p><strong>R</strong>etrieval <strong>A</strong>ugmented <strong>G</strong>eneration (RAG) essentially means looking up relevant information and incorporating it into the prompt. Instead of including all possible information in the prompt, we index the additional information we care about. When a question is asked, we use that question to find the most relevant indexed content and add just that specific content to the prompt. To facilitate this process, we need embeddings.&nbsp;</p>



<p>An embedding can be thought of as a vector (an array of floating-point values) that represents the content and its semantic meaning. We can use a model specifically designed for embeddings to generate such a vector for a given input, and then store both the vector and the original text in a database. Later, when a question is posed, we can run that question through the same model to produce a vector, which we then use to find the most relevant embeddings in our database. We are not necessarily looking for exact matches, but rather for sufficiently &#8216;close&#8217; ones &#8211; and &#8216;close&#8217; here is quite literal, as the lookups are typically performed using distance measures like cosine similarity. For instance, consider this program:&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Embeddings;
using System.Numerics.Tensors;

namespace chatSampleApp06
{
    internal class Program
    {
        static string embeddingDeploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:EmbeddingDeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Experimental
            #pragma warning disable SKEXP0010,SKEXP0001

            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();

            builder.AddAzureOpenAITextEmbeddingGeneration(embeddingDeploymentName, endpoint, apiKey);

            var kernel = builder.Build();

            var input = "What is a reptile?";
            var examples = new string[]
            {
                "What is a reptile?",
                "¿Qué es un reptil?",
                "Was ist ein Reptil?",
                "A turtle is a reptile.",
                "Eidechse ist ein Beispiel für Reptilien.",
                "Crocodiles, lizards, snakes, and turtles are all examples.",
                "A frog is green.",
                "A grass is green.",
                "A cat is a mammal.",
                "A dog is a man's best friend.",
                "My best friend is Mike.",
                "I'm working at Inetum Polska since 2013."
            };

            // Generate embeddings for each piece of text
            var embeddingGenerator = kernel.GetRequiredService&lt;ITextEmbeddingGenerationService>();
            var inputEmbedding = (await embeddingGenerator.GenerateEmbeddingsAsync([input])).First();

            var exampleEmbeddings = (await embeddingGenerator.GenerateEmbeddingsAsync(examples)).ToArray();
            var similarities = new List&lt;Tuple&lt;float, string>>();

            // Print the cosine similarity between the input and each example
            for (int i = 0; i &lt; exampleEmbeddings.Length; i++)
            {
                similarities.Add(
                    new Tuple&lt;float, string>(
                        TensorPrimitives.CosineSimilarity(exampleEmbeddings[i].Span, inputEmbedding.Span), 
                        examples[i]));
            }

            similarities.Sort((x,y) => y.Item1.CompareTo(x.Item1));

            Console.WriteLine("Similarity\tExample");
            foreach (var similarity in similarities) {
                Console.WriteLine($"{similarity.Item1:F6}\t{similarity.Item2}");
            }
            Console.ReadLine();
        }
    }
}</pre>



<p>This process utilizes the AzureOpenAI embedding generation service to obtain an embedding vector (using the text-embedding-3-large model mentioned earlier in the post) for both an input and several other pieces of text. It then compares the resulting embedding for the input with the embeddings of those other texts, sorts the results based on similarity, and prints them out.&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="390" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_09.png" alt="semantic kernel" class="wp-image-28888" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 16" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_09.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_09-300x105.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_09-768x269.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_09-495x173.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>Let&#8217;s incorporate this concept into the chat app. In this round, I have augmented the previous chat example with a few things:&nbsp;</p>



<ul class="wp-block-list">
<li>In order for Semantic Kernel to handle the embedding generation through its abstractions, we need to include its Memory package. Please note the --prerelease flag, as this is an evolving area. While some Semantic Kernel components are stable, others are still in development and therefore marked as prerelease.&nbsp;</li>
</ul>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dotnet add package Microsoft.SemanticKernel.Plugins.Memory --prerelease </pre>



<ul class="wp-block-list">
<li>Next, I need to create an ISemanticTextMemory for querying. I achieved this by using MemoryBuilder to combine an embeddings generator with a database. I specified the Azure OpenAI service as my embeddings generator using the WithAzureOpenAITextEmbeddingGeneration method. For the store, I registered a VolatileMemoryStore instance using the WithMemoryStore method. Although we will change this later, it will suffice for now. VolatileMemoryStore is essentially an implementation of Semantic Kernel&#8217;s IMemoryStore abstraction that wraps an in-memory dictionary.&nbsp;</li>



<li>I downloaded the text and used Semantic Kernel&#8217;s TextChunker to break it into pieces. Then, I saved each piece to the memory store using `SaveInformationAsync`. This process generates an embedding for the text and stores the resulting vector along with the input text in the dictionary.&nbsp;</li>



<li>When it is time to ask a question, instead of just adding the question to the chat history and submitting it, we first use the question to perform a SearchAsync on the memory store. This generates an embedding vector for the question and searches the store for the closest vectors. I have it return the three closest matches, append the associated text together, add the results to the chat history, and submit it. After submitting the request, I remove this additional context from the chat history to avoid sending it again in subsequent requests, as it can consume much of the allowed context window.&nbsp;</li>
</ul>



<p>Here is the full source code:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.Memory;
using Microsoft.SemanticKernel.Text;
using System.Net;
using System.Text;
using System.Text.RegularExpressions;

namespace chatSampleApp07
{
    internal class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string embeddingDeploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:EmbeddingDeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Experimental
            #pragma warning disable SKEXP0001, SKEXP0010, SKEXP0050

            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
            var kernel = builder.Build();

            //Initialize Memory Builder
            var memoryBuilder = new MemoryBuilder()
                .WithMemoryStore(new VolatileMemoryStore())
                .WithAzureOpenAITextEmbeddingGeneration(
                    deploymentName: embeddingDeploymentName,
                    endpoint: endpoint,
                    apiKey: apiKey
                );

            var memory = memoryBuilder.Build();

            // Download the documents and add their contents to our memory store
            var articleList = new List&lt;string>
            {
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/relationships-between-language-and-library.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/version-update-considerations.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-version-history.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-11.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%207.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-12.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%208.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-13.md",
                "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%209.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/overview.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/runtime.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/sdk.md",
                "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/containers.md"
            };

            var collectionName = "microsoft-news";

            using (var httpClient = new HttpClient())
            {
                var allParagraphs = new List&lt;string>();

                foreach (var article in articleList)
                {
                    var content = await httpClient.GetStringAsync(article);
                    var lines = TextChunker.SplitPlainTextLines(content, 64);
                    var paragraphs = TextChunker.SplitPlainTextParagraphs(lines, 512);

                    allParagraphs.AddRange(paragraphs);
                }

                for (var i = 0; i &lt; allParagraphs.Count; i++)
                {
                    await memory.SaveInformationAsync(collectionName, allParagraphs[i], $"paragraph[{i}]");
                }
            }

            //Create new chat
            var chatService = kernel.GetRequiredService&lt;IChatCompletionService>();
            var chat = new ChatHistory(
                    systemMessage: "You are an AI assistant that helps people find information."
                );

            string question;
            var responseBuilder = new StringBuilder();
            var contextBuilder = new StringBuilder();

            //Question and Answer loop
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;

                await foreach (var result in memory.SearchAsync(collectionName, question, limit: 3))
                {
                    contextBuilder.AppendLine(result.Metadata.Text);
                }

                var contextToRemove = -1;
                if (contextBuilder.Length > 0)
                {
                    contextBuilder.Insert(0, "Here's some additional information: ");
                    contextToRemove = chat.Count;
                    chat.AddUserMessage(contextBuilder.ToString());
                }

                chat.AddUserMessage(question);

                responseBuilder.Clear();
                Console.Write("My Copilot: ");

                await foreach (var message in chatService.GetStreamingChatMessageContentsAsync(chat,null, kernel))
                {
                    Console.Write(message);
                    responseBuilder.Append(message.Content);
                }

                Console.WriteLine();
                chat.AddAssistantMessage(responseBuilder.ToString());

                if (contextToRemove >= 0)
                {
                    chat.RemoveAt(contextToRemove);
                }

                Console.WriteLine();
            }
        }
    }
}</pre>



<p>The text chunking code divided the documents into 104 &#8220;paragraphs&#8221;, resulting in 104 embeddings being created and stored in the database. The exciting part is that with all these embeddings in place, when we pose our question, the store retrieves the most relevant material and that additional text is added to the prompt. Now, when we ask the same questions as before, we receive a much more helpful and accurate response:</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1115" height="570" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_10.png" alt="semantic kernel" class="wp-image-28891" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 17" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_10.png 1115w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_10-300x153.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_10-768x393.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_10-495x253.png 495w" sizes="auto, (max-width: 1115px) 100vw, 1115px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="Storing-memories">Storing memories</h2>



<p>Naturally, we do not want to reindex all documents every time the application starts. Imagine a public website serving chats for thousands of users across hundreds of documents: reindexing all content on every restart would be not only time-consuming but also unnecessarily expensive. For instance, the Azure OpenAI embedding model I use costs €0.000121 per 1,000 tokens (Azure OpenAI Service pricing), meaning indexing just these documents costs a couple of cents (but remember: &#8220;scale makes a difference&#8221;). </p>
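<p>As a quick sanity check, that cost can be estimated from the chunk count and chunk size used in this example (a sketch of my own; treating every chunk as a full 512 tokens is a simplifying upper-bound assumption):</p>

```csharp
using System;

class EmbeddingCostEstimate
{
    static void Main()
    {
        // Rate quoted above for the Azure OpenAI embedding model (EUR per 1,000 tokens).
        const double pricePerThousandTokens = 0.000121;

        // Figures from this example: 104 chunks, each at most 512 tokens.
        const int chunkCount = 104;
        const int maxTokensPerChunk = 512;

        // Cost = total tokens / 1000 * price per 1,000 tokens (upper bound).
        double totalTokens = (double)chunkCount * maxTokensPerChunk;
        double cost = totalTokens / 1000.0 * pricePerThousandTokens;

        Console.WriteLine($"~{totalTokens:N0} tokens -> ~EUR {cost:F4} per full reindex");
    }
}
```

<p>Tiny for one run, but multiplied by every restart and every user-facing instance, it adds up quickly.</p>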



<p>Therefore, we should switch to using persistent storage. Semantic Kernel provides various IMemoryStore implementations, and we can easily switch to one that persists the results. For example, let&#8217;s switch to one based on Sqlite. To do this, we need another NuGet package:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dotnet add package Microsoft.SemanticKernel.Connectors.Sqlite --prerelease</pre>



<p>and with that, we can change just one line of code to switch from the VolatileMemoryStore:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">.WithMemoryStore(new VolatileMemoryStore())</pre>



<p>to the SqliteMemoryStore:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">.WithMemoryStore(await SqliteMemoryStore.ConnectAsync("data\\rag-data.db"))</pre>



<p>Sqlite is an embedded SQL database engine that operates within the same process and stores its data in standard disk files. In this case, it will connect to a rag-data.db file, creating it if it does not already exist. However, if we were to run this, we would still end up generating the embeddings again, as our previous example did not include a check to see if the data already existed. Therefore, our final step is to add a guard to prevent this redundant work.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">var collectionName = "microsoft-news";
var collections = await memory.GetCollectionsAsync();

if (!collections.Contains(collectionName))
{                       
    ... // same code as before to download and process the documents
}
else
{
    Console.WriteLine($"Found '{collectionName}' in RAG database");
}</pre>



<p>You get the idea. Here is the complete version using Sqlite:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.Connectors.Sqlite;
using Microsoft.SemanticKernel.Memory;
using Microsoft.SemanticKernel.Text;
using System.Text;

namespace chatSampleApp08
{
    internal class Program
    {
        static string deploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:DeploymentName")!;
        static string embeddingDeploymentName = Environment.GetEnvironmentVariable("AI:OpenAI:EmbeddingDeploymentName")!;
        static string endpoint = Environment.GetEnvironmentVariable("AI:OpenAI:Endpoint")!;
        static string apiKey = Environment.GetEnvironmentVariable("AI:OpenAI:APIKey")!;

        static async Task Main(string[] args)
        {
            //Experimental
            #pragma warning disable SKEXP0001, SKEXP0010, SKEXP0020, SKEXP0050

            //Initialize Semantic Kernel
            var builder = Kernel.CreateBuilder();
            builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
            var kernel = builder.Build();

            //Initialize Memory Builder
            var memoryBuilder = new MemoryBuilder()
                .WithMemoryStore(await SqliteMemoryStore.ConnectAsync("data\\rag-data.db"))
                .WithAzureOpenAITextEmbeddingGeneration(
                    deploymentName: embeddingDeploymentName,
                    endpoint: endpoint,
                    apiKey: apiKey
                );

            var memory = memoryBuilder.Build();

            var collectionName = "microsoft-news";
            var collections = await memory.GetCollectionsAsync();
            
            if (!collections.Contains(collectionName))
            {                       
                // Download the documents and add their contents to our memory store
                var articleList = new List&lt;string>
                {
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/relationships-between-language-and-library.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/version-update-considerations.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-version-history.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-11.md",
                    "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%207.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-12.md",
                    "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%208.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/csharp/whats-new/csharp-13.md",
                    "https://raw.githubusercontent.com/dotnet/roslyn/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20DotNet%209.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/overview.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/runtime.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/sdk.md",
                    "https://raw.githubusercontent.com/dotnet/docs/main/docs/core/whats-new/dotnet-8/containers.md"
                };

                using (var httpClient = new HttpClient())
                {
                    var allParagraphs = new List&lt;string>();

                    foreach (var article in articleList)
                    {
                        var content = await httpClient.GetStringAsync(article);
                        var lines = TextChunker.SplitPlainTextLines(content, 64);
                        var paragraphs = TextChunker.SplitPlainTextParagraphs(lines, 512);

                        allParagraphs.AddRange(paragraphs);
                    }

                    for (var i = 0; i &lt; allParagraphs.Count; i++)
                    {
                        await memory.SaveInformationAsync(collectionName, allParagraphs[i], $"paragraph[{i}]");
                    }
                }
            }
            else
            {
                Console.WriteLine($"Found '{collectionName}' in RAG database");
            }

            //Create new chat
            var chatService = kernel.GetRequiredService&lt;IChatCompletionService>();
            var chat = new ChatHistory(
                    systemMessage: "You are an AI assistant that helps people find information."
                );

            string question;
            var responseBuilder = new StringBuilder();
            var contextBuilder = new StringBuilder();

            //Question and Answer loop
            while (true)
            {
                Console.Write("Me: ");
                question = Console.ReadLine()!;

                await foreach (var result in memory.SearchAsync(collectionName, question, limit: 3))
                {
                    contextBuilder.AppendLine(result.Metadata.Text);
                }

                var contextToRemove = -1;
                if (contextBuilder.Length > 0)
                {
                    contextBuilder.Insert(0, "Here's some additional information: ");
                    contextToRemove = chat.Count;
                    chat.AddUserMessage(contextBuilder.ToString());
                }

                chat.AddUserMessage(question);

                responseBuilder.Clear();
                Console.Write("My Copilot: ");

                await foreach (var message in chatService.GetStreamingChatMessageContentsAsync(chat, null, kernel))
                {
                    Console.Write(message);
                    responseBuilder.Append(message.Content);
                }

                Console.WriteLine();
                chat.AddAssistantMessage(responseBuilder.ToString());

                if (contextToRemove >= 0)
                {
                    chat.RemoveAt(contextToRemove);
                }

                Console.WriteLine();
            }
        }
    }
}</pre>



<p>Now, when we run it, the first invocation will still index everything, but after that, the data will already be indexed:</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="620" height="75" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_11.png" alt="semantic kernel" class="wp-image-28894" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 18" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_11.png 620w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_11-300x36.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_11-495x60.png 495w" sizes="auto, (max-width: 620px) 100vw, 620px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>and subsequent invocations can simply use it.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="592" height="191" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_12.png" alt="semantic kernel" class="wp-image-28897" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 19" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_12.png 592w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_12-300x97.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_12-495x160.png 495w" sizes="auto, (max-width: 592px) 100vw, 592px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>While Sqlite is a fantastic tool, it is not specifically optimized for performing these types of searches. In fact, the code for this SqliteMemoryStore in SK merely enumerates the entire database and performs a CosineSimilarity check on each entry.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">// from: https://github.com/microsoft/semantic-kernel/blob/9264b3e0b42e184b7e9e8b2a073d8a721c4af92a/dotnet/src/Connectors/Connectors.Memory.Sqlite/SqliteMemoryStore.cs#L135

await foreach (var record in this.GetAllAsync(collectionName, cancellationToken).ConfigureAwait(false))
{
    if (record is not null)
    {
        double similarity = TensorPrimitives.CosineSimilarity(embedding.Span, record.Embedding.Span);
        ...
    }
}</pre>
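<p>For intuition, that similarity is just the cosine of the angle between the two embedding vectors. Here is a minimal standalone sketch of the same calculation (my own illustration, not the optimized TensorPrimitives implementation):</p>

```csharp
using System;

public class CosineDemo
{
    // Cosine similarity: dot(a, b) / (|a| * |b|). A higher value means the
    // vectors point in more similar directions, i.e. the texts are
    // semantically closer.
    public static double CosineSimilarity(float[] a, float[] b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Vectors must have the same length.");

        double dot = 0, normA = 0, normB = 0;
        for (var i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
    }

    static void Main()
    {
        // Identical vectors score 1.0; orthogonal vectors score 0.0.
        Console.WriteLine(CosineSimilarity(new float[] { 1, 0 }, new float[] { 1, 0 })); // 1
        Console.WriteLine(CosineSimilarity(new float[] { 1, 0 }, new float[] { 0, 1 })); // 0
    }
}
```

<p>Running this comparison against every stored record works fine for a hundred paragraphs, but it is an O(n) scan per query, which is exactly why dedicated vector databases with approximate-nearest-neighbor indexes exist.</p>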



<p>For real scale and the ability to share data across multiple frontends, we need a dedicated &#8216;vector database&#8217; designed for storing and searching embeddings. There are many such vector databases available now, including Azure AI Search, Chroma, Milvus, Pinecone, Qdrant, Weaviate, and many more. We can easily set one of these up, change our WithMemoryStore call to use the appropriate connector, and we are ready to go. For this example, I have chosen Azure AI Search.</p>



<p>I add the relevant Semantic Kernel &#8220;connector&#8221; to my project:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dotnet add package Microsoft.SemanticKernel.Connectors.AzureAISearch --prerelease</pre>



<p>and then add a couple of lines:&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">static string azureAISearchEndpoint = Environment.GetEnvironmentVariable("AI:AzureAISearch:Endpoint")!;
static string azureAISearchApiKey = Environment.GetEnvironmentVariable("AI:AzureAISearch:APIKey")!;</pre>



<p>change from:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">.WithMemoryStore(await SqliteMemoryStore.ConnectAsync("data\\rag-data.db"))</pre>



<p>to:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">.WithMemoryStore(new AzureAISearchMemoryStore(azureAISearchEndpoint, azureAISearchApiKey)) </pre>



<p>And that&#8217;s it! The application works as before but much faster.&nbsp;&nbsp;</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="821" src="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13-1296x821.png" alt="semantic kernel" class="wp-image-28900" title="Building a chat app with AI in 15 Minutes: Leveraging Semantic Kernel, plugins and RAG 20" srcset="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13-1296x821.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13-300x190.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13-768x487.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13-495x314.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_13.png 1321w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="Enhancing-your-Semantic-integration:-use-the-Semantic-best-practices-and-considerations">Enhancing your Semantic integration: use the Semantic best practices and considerations</h2>



<p>Wow! Clearly, I have left out many crucial details that any real application would need to address. For instance, how should the data being indexed be cleaned, normalized, and chunked? How should errors be managed? How can we limit the amount of data sent with each request, such as restricting chat history or the size of the found embeddings? How to make the application more secure (API Key vs Managed Identity)? Which service is the best for storing all the information? </p>
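<p>For the chat-history question, one simple option is a sliding window over the history. The sketch below uses a plain list of strings to show the idea (my own addition, not part of the sample above; with Semantic Kernel the same loop works directly on ChatHistory, which exposes Count and RemoveAt as used earlier):</p>

```csharp
using System;
using System.Collections.Generic;

public class HistoryTrimDemo
{
    // Keep index 0 (the system message) plus at most maxMessages of the
    // most recent turns; drop the oldest user/assistant messages first.
    public static void TrimToWindow(IList<string> chat, int maxMessages)
    {
        while (chat.Count > maxMessages + 1)
        {
            chat.RemoveAt(1); // index 0 is the system message, so remove from 1
        }
    }

    static void Main()
    {
        var chat = new List<string> { "system", "u1", "a1", "u2", "a2", "u3", "a3" };
        TrimToWindow(chat, 4);
        Console.WriteLine(string.Join(", ", chat)); // prints: system, u2, a2, u3, a3
    }
}
```

<p>A more careful version would trim by estimated token count rather than message count, and would always keep the most recent user question.</p>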



<p>And there are many other considerations, including making the UI much more attractive than my basic Console.WriteLine calls. Despite these missing details, I hope it is evident that you can start integrating this kind of functionality into your applications right away.</p>



<p><strong>Also read:</strong><a href="https://nearshore-it.eu/technologies/python-pandas-tutorial-check-our-complete-introduction-to-pandas/"><strong> </strong>Introduction to Python Pandas Libraries</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/create-ai-chat-with-semantic-kernel/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_06.mp4" length="24632603" type="video/mp4" />

		<media:content url="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_06.mp4" medium="video" width="1116" height="876">
			<media:player url="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_screen_06.mp4" />
			<media:title type="plain">Cloud engineering Archives - Nearshore Software Development Company - IT Outsourcing Services</media:title>
			<media:thumbnail url="https://nearshore-it.eu/wp-content/uploads/2024/08/nearshore_2024.08.23_cover-1.jpg" />
			<media:rating scheme="urn:simple">nonadult</media:rating>
		</media:content>
	</item>
		<item>
		<title>Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB</title>
		<link>https://nearshore-it.eu/articles/auto-scaling-azure-sql-database/</link>
					<comments>https://nearshore-it.eu/articles/auto-scaling-azure-sql-database/#respond</comments>
		
		<dc:creator><![CDATA[Marek Dobkowski]]></dc:creator>
		<pubDate>Fri, 26 Jul 2024 13:46:17 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=28282</guid>

					<description><![CDATA[How can you easily scale up the Azure SQL Database and reduce costs? Using the portal, you can meet all the performance requirements and save money. Learn how!]]></description>
										<content:encoded><![CDATA[
<p>In the world of data management, the cost of maintaining a database can be a significant concern for many businesses. The Serverless Azure SQL Database has emerged as a potential solution, promising a cost-effective approach to database management without compromising on performance or scalability. </p>



<p>How can Azure DB be used to its highest potential?  How can you easily scale up the Azure SQL Database and reduce costs? By using the portal, you can meet all the performance requirements and save money at the same time. Read on to learn how! </p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#Serverless-Azure-SQL-Database:-key-features">1.  Serverless Azure SQL Database: key features</a></li>
                    <li><a href="#Choosing-the-right-purchasing-model-for-Azure-SQL-Database-">2.  Choosing the right purchasing model for Azure SQL Database</a></li>
                    <li><a href="#Serverless-vs-provisioned-–-key-differences">3.  Serverless vs provisioned – key differences</a></li>
                    <li><a href="#Reducing-the-cost-of-Azure-SQL-Database.-Real-world-example">4.  Reducing the cost of Azure SQL Database. Real-world example</a></li>
                    <li><a href="#Step-by-step-tutorial.-Automating-Azure-SQL-Database-scaling-based-on-CPU-usage">5.  Step-by-step tutorial. Automating Azure SQL Database scaling based on CPU usage </a></li>
                    <li><a href="#Summary ">6.  Summary</a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="Serverless-Azure-SQL-Database:-key-features">The Serverless Azure SQL Database: key features</h2>



<p>Serverless Azure SQL Database offers a multitude of features and benefits, including automatic pause and resume capabilities, built-in high availability, and robust security measures. It is designed to simplify performance management and reduce costs, as you only pay for the computing resources you use.&nbsp;</p>



<p>However, there are also potential drawbacks to consider. The auto-pause feature, while cost-effective, can lead to a delay in database availability, which might not be suitable for applications requiring instant access. Additionally, while the serverless model can handle fluctuating workloads, it might not be the best fit for workloads with consistent high-performance needs, as the scaling process can introduce a slight latency.&nbsp;</p>



<h2 class="wp-block-heading" id="Choosing-the-right-purchasing-model-for-Azure-SQL-Database-">Choosing the right purchasing model for the Azure SQL Database </h2>



<p>The Azure SQL Database offers a fully managed platform as a service (PaaS) database engine that aligns with your specific performance and budgetary requirements. When selecting a purchasing model for the Azure SQL Database, based on your chosen deployment model, you have the following options: </p>



<ul class="wp-block-list">
<li><strong>Virtual core (vCore)-based purchasing model</strong>: This model allows you to choose between provisioned or serverless compute tiers:&nbsp;
<ul class="wp-block-list">
<li>In the <strong>provisioned compute tier</strong>, you have the ability to allocate a fixed amount of compute resources dedicated to your workload. </li>



<li>The <strong>serverless compute tier offers</strong> flexibility in terms of autoscaling compute resources within a set range and includes cost-saving measures by pausing databases during inactivity (billing only for storage) and resuming them when activity picks up. The cost per vCore unit is more economical in the provisioned compute tier compared to the serverless tier. </li>
</ul>
</li>



<li><strong>Database transaction unit (DTU)-based purchasing model</strong>: This model offers combined compute and storage packages, which are optimized for typical workloads.&nbsp;</li>
</ul>



<p>Find out more about them at the official <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/purchasing-models?view=azuresql" target="_blank" rel="noreferrer noopener">Microsoft Learn website</a> </p>



<h2 class="wp-block-heading" id="Serverless-vs-provisioned-–-key-differences">Serverless vs provisioned – key differences</h2>



<p>When you compare serverless and provisioned compute tiers in the Azure SQL Database, you can find a couple of key differences: </p>



<h3 class="wp-block-heading">Serverless Compute Tier:</h3>



<ul class="wp-block-list">
<li><strong>Autoscaling</strong>: Compute resources scale automatically based on workload demands. </li>



<li><strong>Automatic pausing and resuming</strong>: Databases automatically pause during inactivity, reducing costs as you are billed only for storage, and resume when activity returns.</li>



<li><strong>Cost-effective</strong>: You pay only for the compute resources you use, which can lead to cost savings during periods of low or no activity, so it is ideal for workloads with intermittent and unpredictable usage patterns, such as development and testing environments.</li>
</ul>
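<p>To make the trade-off concrete, here is a rough monthly cost comparison for a workload that is only active during business hours (all rates and figures below are illustrative placeholders of my own, not actual Azure prices; always check the official pricing page):</p>

```csharp
using System;

class ServerlessCostSketch
{
    static void Main()
    {
        // Illustrative placeholder rates; real prices vary by region and tier.
        const double serverlessRatePerVCoreSecond = 0.000145; // assumed EUR
        const double provisionedRatePerVCoreHour = 0.30;      // assumed EUR

        const double vCores = 2;
        const double activeHoursPerMonth = 8 * 22; // business hours, 22 working days
        const double hoursPerMonth = 730;

        // Serverless bills compute only while the database is active.
        double serverless = activeHoursPerMonth * 3600 * vCores * serverlessRatePerVCoreSecond;

        // Provisioned bills the allocated vCores around the clock.
        double provisioned = hoursPerMonth * vCores * provisionedRatePerVCoreHour;

        Console.WriteLine($"Serverless:  ~EUR {serverless:F0}/month (paused the rest of the time)");
        Console.WriteLine($"Provisioned: ~EUR {provisioned:F0}/month (always on)");
    }
}
```

<p>Note that the serverless per-vCore rate is higher than the provisioned one, so the savings come entirely from the idle time; a database that is busy around the clock would be cheaper provisioned.</p>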



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="613" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_serverless-billing-01-1296x613.png" alt="Azure SQL Database" class="wp-image-28283" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 21" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_serverless-billing-01-1296x613.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_serverless-billing-01-300x142.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_serverless-billing-01-768x363.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_serverless-billing-01-1536x726.png 1536w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_serverless-billing-01-495x234.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_serverless-billing-01.png 1586w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">Provisioned Compute Tier:&nbsp;</h3>



<ul class="wp-block-list">
<li><strong>Fixed resources</strong>: Compute resources are pre-allocated and dedicated to your workload.&nbsp;</li>



<li><strong>Consistent performance</strong>: Ideal for workloads with predictable usage patterns that require consistent performance.&nbsp;</li>
</ul>



<h2 class="wp-block-heading" id="Reducing-the-cost-of-Azure-SQL-Database.-Real-world-example">Reducing the cost of Azure SQL Database &#8211; a real-world example</h2>



<p>Let&#8217;s take a look at a real case from one of my Azure environments. </p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="613" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-01-1296x613.png" alt="Azure SQL Database" class="wp-image-28285" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 22" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-01-1296x613.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-01-300x142.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-01-768x363.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-01-1536x726.png 1536w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-01-495x234.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-01.png 1586w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>You can observe that during business hours, when users are actively using the application, the system requires much more computing power. Another peak occurs shortly after midnight, when the system calculates aggregates (derived data) from the transactional data recorded during the previous day. </p>



<p>First, I tried the serverless compute tier, which (at first glance) fits my application usage perfectly and costs me <strong>$568 monthly</strong>.&nbsp;</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="612" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-02-1296x612.png" alt="Azure SQL Database" class="wp-image-28287" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 23" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-02-1296x612.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-02-300x142.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-02-768x363.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-02-1536x726.png 1536w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-02-495x234.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-02.png 1587w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>Then, I started to think about how to reduce the cost of the Azure SQL Database. </p>



<p>My first idea was to use the provisioned compute tier (2 vCores => <strong>$385 monthly</strong>), but it does not handle peak business hours&#8230; </p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="612" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-03-1296x612.png" alt="Azure SQL Database" class="wp-image-28289" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 24" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-03-1296x612.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-03-300x142.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-03-768x363.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-03-1536x726.png 1536w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-03-495x234.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-03.png 1587w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>The provisioned compute tier does not support auto-scaling, but if it did, what would be the cost impact? </p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="612" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-04-1296x612.png" alt="Azure SQL Database" class="wp-image-28291" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 25" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-04-1296x612.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-04-300x142.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-04-768x363.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-04-1536x726.png 1536w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-04-495x234.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_cpu-usage-04.png 1587w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>Based on this model, it will cost me <strong>$517 monthly</strong>, which comes with <strong>$51 monthly savings</strong> (almost a <strong>9% cost reduction</strong>).&nbsp;</p>



<p>I estimate that introducing a pausing mechanism would save an extra $50-60 monthly.&nbsp;</p>



<p>Another observation is that the lower price buys you more available computing power &#8211; an average of 1.4 vCores (serverless) vs 2.7 vCores (provisioned). In other words, you can do more while paying less. To summarize, the provisioned compute tier or a DTU-based purchasing model may be more cost-effective, but the service <strong>doesn&#8217;t support auto-scaling</strong> as some of us would expect&#8230; However, in Microsoft Azure we can set up a workflow that auto-scales an Azure SQL Database instance to the next tier whenever a specific condition is met. </p>



<p>For example: scale up the database as soon as it stays <strong>over 85% CPU usage</strong> for a <strong>sustained period of 5 minutes</strong>, or <strong>scale down</strong> when CPU usage drops <strong>below 40%</strong>. </p>



<p>The following tutorial shows how to achieve exactly that. </p>



<figure class="wp-block-table"><table><tbody><tr><td><br><div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/07/Marek-Dobkowski-1-bezloczkowy-kwadrat.jpg" alt="Marek Dobkowski 1 bezloczkowy kwadrat" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 26"></div><div class="tile-content"><p class="entry-title client-name promotion-box__headline2">Optimize your Azure SQL Database and take your project to the next level</p>
<p class="promotion-box__description2">Schedule a consultation with Marek today to streamline your database management and maximize efficiency</p>
<a class="btn btn-primary booking" href="https://outlook.office365.com/book/BookameetingwithMarek1@gfi.fr/" target="_blank" rel="noopener">Book a call</a></div></div></div></div></td></tr></tbody></table></figure>



<h2 class="wp-block-heading" id="Step-by-step-tutorial.-Automating-Azure-SQL-Database-scaling-based-on-CPU-usage">Step-by-step tutorial. Automating Azure SQL Database scaling based on CPU usage</h2>



<h3 class="wp-block-heading">Step #1: Deploy an Azure Automation Account </h3>



<p>The scaling operation will be carried out through a PowerShell runbook within an Azure Automation account. Navigate to the Azure Portal, use the search bar to find Automation, and proceed to set up a new Automation Account.&nbsp;</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="592" height="323" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_deploy-azure-automation-account-01.png" alt="Azure SQL Database" class="wp-image-28293" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 27" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_deploy-azure-automation-account-01.png 592w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_deploy-azure-automation-account-01-300x164.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_deploy-azure-automation-account-01-495x270.png 495w" sizes="auto, (max-width: 592px) 100vw, 592px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="809" height="541" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_deploy-azure-automation-account-02.png" alt="Azure SQL Database" class="wp-image-28295" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 28" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_deploy-azure-automation-account-02.png 809w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_deploy-azure-automation-account-02-300x201.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_deploy-azure-automation-account-02-768x514.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_deploy-azure-automation-account-02-495x331.png 495w" sizes="auto, (max-width: 809px) 100vw, 809px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>
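<p>If you prefer scripting over the portal, the same account can be deployed with Az PowerShell. This is a sketch with placeholder names. Note that the runbook in the next step signs in with <code>Connect-AzAccount -Identity</code>, so the account needs a system-assigned managed identity with rights on the SQL server:</p>

```powershell
# Create the Automation Account with a system-assigned managed identity
# (names, location, and subscription ID are hypothetical placeholders).
New-AzAutomationAccount `
    -ResourceGroupName "my-rg" `
    -Name "autoscale-aa" `
    -Location "westeurope" `
    -AssignSystemIdentity

# Grant the identity the rights it needs to scale databases on the server.
$identity = (Get-AzAutomationAccount -ResourceGroupName "my-rg" -Name "autoscale-aa").Identity
New-AzRoleAssignment `
    -ObjectId $identity.PrincipalId `
    -RoleDefinitionName "SQL DB Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-sql-server"
```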



<h3 class="wp-block-heading">Step #2: Create a PowerShell runbook </h3>



<p>With our Automation Account now in place, we are ready to proceed with the scripting process. Create a new runbook and paste the following code into it:&nbsp;</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="565" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-process-automation-runbook-01-1296x565.png" alt="Azure SQL Database" class="wp-image-28297" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 29" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-process-automation-runbook-01-1296x565.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-process-automation-runbook-01-300x131.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-process-automation-runbook-01-768x335.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-process-automation-runbook-01-1536x669.png 1536w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-process-automation-runbook-01-495x216.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-process-automation-runbook-01.png 1770w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>The script below uses the webhook data transmitted by the alert. This payload carries the details of the resource that triggered the alert, which lets the script autonomously scale any database without requiring parameters. It simply needs to be triggered by an alert that uses the Common Alert Schema on an Azure SQL database.&nbsp;</p>
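<p>To see why the script can derive everything from the payload, note that each entry in <code>alertTargetIds</code> is a full Azure resource ID. Splitting it on <code>/</code> yields the subscription, resource group, server, and database at fixed positions (the ID below is a made-up example):</p>

```powershell
# A hypothetical alert target ID for an Azure SQL database
$id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-sql-server/databases/my-db"
$parts = $id.Split("/")

$parts[2]    # subscription ID
$parts[4]    # resource group   -> "my-rg"
$parts[8]    # server name      -> "my-sql-server"
$parts[-1]   # database name    -> "my-db"
```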



<pre class="EnlighterJSRAW" data-enlighter-language="powershell" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">param
(
    [Parameter (Mandatory = $false)]
    [object] $WebhookData
)

# If webhook data is received from an Azure Alert, proceed to execute the workflow.
if ($WebhookData) {
    # Get the data object from WebhookData
    $WebhookBody = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)

    # Get the info needed to identify the SQL database (depends on the payload schema)
    $schemaId = $WebhookBody.schemaId

    Write-Verbose "schemaId: $schemaId" -Verbose
    if ($schemaId -eq "azureMonitorCommonAlertSchema") {
        # Common Metric Alert schema
        $essentials = [object] ($WebhookBody.data).essentials
        Write-Output $essentials

        # Get the first target only, as this script doesn't support multiple targets
        $alertTargetIdArray = (($essentials.alertTargetIds)[0]).Split("/")
        $subscriptionId = ($alertTargetIdArray)[2]
        $resourceGroupName = ($alertTargetIdArray)[4]
        $resourceType = ($alertTargetIdArray)[6] + "/" + ($alertTargetIdArray)[7]
        $serverName = ($alertTargetIdArray)[8]
        $databaseName = ($alertTargetIdArray)[-1]
        $status = $essentials.monitorCondition

        # If the alert that triggered the runbook is Activated or Fired, we want to autoscale the database.
        # When the alert gets resolved, the runbook is triggered again, but because the status is Resolved, no autoscaling happens.
        if (($status -eq "Activated") -or ($status -eq "Fired")) {
            try {
                "Logging in to Azure using managed identity for automation account..."
                Connect-AzAccount -Identity
            }
            catch {
                Write-Error -Message $_.Exception
                throw $_.Exception
            }

            # Get the current database details, from which we capture the edition and the current service objective.
            # With this information, the if/else below determines the next tier the database should be scaled to.
            # Example: if a DTU database is S6, this script scales it to S7. This ensures the script keeps scaling up the DB if CPU stays pegged at 100%.
            $currentDatabaseDetails = Get-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName
            $edition = $currentDatabaseDetails.Edition
            if ($edition -in @("Basic", "Standard", "Premium")) {
                Write-Output "Database is DTU model."

                $dtuTiers = (Get-AzSqlServerServiceObjective -Location $currentDatabaseDetails.Location) | Where-Object { $_.Enabled -eq $true -and $_.Edition -eq $edition -and $_.CapacityUnit -eq "DTU" } | Select-Object -ExpandProperty ServiceObjectiveName
                $maxDtuTier = ($dtuTiers | Select-Object -Last 1)

                if ($currentDatabaseDetails.CurrentServiceObjectiveName -eq $maxDtuTier) {
                    Write-Output "DTU database is already at the highest tier ($maxDtuTier). Suggestion is to move to the Business Critical vCore model with 32+ vCores."
                }
                else {
                    for ($i = 0; $i -lt $dtuTiers.Length; $i++) {
                        if ($dtuTiers[$i] -eq $currentDatabaseDetails.CurrentServiceObjectiveName) {
                            $targetServiceObjectiveName = $dtuTiers[$i + 1]

                            Write-Output "Scaling up database $databaseName to $targetServiceObjectiveName"
                            Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -RequestedServiceObjectiveName $targetServiceObjectiveName
                            break
                        }
                    }
                }
            }
            elseif ($edition -in @("GeneralPurpose", "BusinessCritical", "Hyperscale")) {
                Write-Output "Database is vCore model."

                $vCoreTiers = (Get-AzSqlServerServiceObjective -Location $currentDatabaseDetails.Location) | Where-Object { $_.Enabled -eq $true -and $_.Edition -eq $edition -and $_.CapacityUnit -eq "VCores" -and $_.Family -eq $currentDatabaseDetails.Family -and $_.SkuName -eq $currentDatabaseDetails.SkuName } | Select-Object -ExpandProperty ServiceObjectiveName
                $maxvCoreTier = ($vCoreTiers | Select-Object -Last 1)

                if ($currentDatabaseDetails.CurrentServiceObjectiveName -eq $maxvCoreTier) {
                    Write-Output "vCore database is already at the highest tier ($maxvCoreTier)."
                }
                else {
                    for ($i = 0; $i -lt $vCoreTiers.Length; $i++) {
                        if ($vCoreTiers[$i] -eq $currentDatabaseDetails.CurrentServiceObjectiveName) {
                            $targetServiceObjectiveName = $vCoreTiers[$i + 1]

                            Write-Output "Scaling up database $databaseName to $targetServiceObjectiveName"
                            Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -RequestedServiceObjectiveName $targetServiceObjectiveName
                            break
                        }
                    }
                }
            }
            else {
                Write-Error "The database edition '$edition' is not supported."
            }

            # All done, closing the alert automatically
            $alert = [object] ($WebhookBody.data).essentials.alertId
            $pos = $alert.LastIndexOf("/")
            $alertId = $alert.Substring($pos + 1, 36)
            Write-Output "Closing alert $alertId"
            Update-AzAlertState -AlertId $alertId -State "Closed" -Comment "Required action was executed automatically by the autoscaleupsqldb-rb runbook. No further action required, hence closing this alert."

        } else {
            Write-Error "The alert status - $status - is not in the expected state."
        }
    }
    else {
        Write-Error "The alert data schema '$schemaId' is not supported."
    }
}
else {
    Write-Error -Message "Webhook data - $WebhookData - is not in the expected format."
}</pre>



<p>And one more for scaling down&#8230;&nbsp;</p>



<pre class="EnlighterJSRAW" data-enlighter-language="powershell" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">param
(
    [Parameter (Mandatory = $false)]
    [object] $WebhookData
)

# If webhook data is received from an Azure Alert, proceed to execute the workflow.
if ($WebhookData) {
    # Get the data object from WebhookData
    $WebhookBody = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)

    # Get the info needed to identify the SQL database (depends on the payload schema)
    $schemaId = $WebhookBody.schemaId

    Write-Verbose "schemaId: $schemaId" -Verbose
    if ($schemaId -eq "azureMonitorCommonAlertSchema") {
        # Common Metric Alert schema
        $essentials = [object] ($WebhookBody.data).essentials
        Write-Output $essentials

        # Get the first target only, as this script doesn't support multiple targets
        $alertTargetIdArray = (($essentials.alertTargetIds)[0]).Split("/")
        $resourceGroupName = ($alertTargetIdArray)[4]
        $serverName = ($alertTargetIdArray)[8]
        $databaseName = ($alertTargetIdArray)[-1]
        $status = $essentials.monitorCondition

        # If the alert that triggered the runbook is Activated or Fired, we want to scale the database down.
        # When the alert gets resolved, the runbook is triggered again, but because the status is Resolved, no scaling happens.
        if (($status -eq "Activated") -or ($status -eq "Fired")) {
            try {
                "Logging in to Azure using managed identity for automation account..."
                Connect-AzAccount -Identity
            }
            catch {
                Write-Error -Message $_.Exception
                throw $_.Exception
            }

            # Get the current database details, from which we capture the edition and the current service objective.
            # With this information, the if/else below determines the previous tier the database should be scaled down to.
            # Example: if a DTU database is S7, this script scales it down to S6.
            $currentDatabaseDetails = Get-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName
            $edition = $currentDatabaseDetails.Edition
            if ($edition -in @("Basic", "Standard", "Premium")) {
                Write-Output "Database is DTU model."

                $dtuTiers = (Get-AzSqlServerServiceObjective -Location $currentDatabaseDetails.Location) | Where-Object { $_.Enabled -eq $true -and $_.Edition -eq $edition -and $_.CapacityUnit -eq "DTU" } | Select-Object -ExpandProperty ServiceObjectiveName
                $minDtuTier = ($dtuTiers | Select-Object -First 1)

                if ($currentDatabaseDetails.CurrentServiceObjectiveName -eq $minDtuTier) {
                    Write-Output "DTU database is already at the lowest tier ($minDtuTier)."
                }
                else {
                    for ($i = ($dtuTiers.Length - 1); $i -gt 0; $i--) {
                        if ($dtuTiers[$i] -eq $currentDatabaseDetails.CurrentServiceObjectiveName) {
                            $targetServiceObjectiveName = $dtuTiers[$i - 1]

                            Write-Output "Scaling down database $databaseName to $targetServiceObjectiveName"
                            Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -RequestedServiceObjectiveName $targetServiceObjectiveName
                            break
                        }
                    }
                }
            }
            elseif ($edition -in @("GeneralPurpose", "BusinessCritical", "Hyperscale")) {
                Write-Output "Database is vCore model."

                $vCoreTiers = (Get-AzSqlServerServiceObjective -Location $currentDatabaseDetails.Location) | Where-Object { $_.Enabled -eq $true -and $_.Edition -eq $edition -and $_.CapacityUnit -eq "VCores" -and $_.Family -eq $currentDatabaseDetails.Family -and $_.SkuName -eq $currentDatabaseDetails.SkuName } | Select-Object -ExpandProperty ServiceObjectiveName
                $minvCoreTier = ($vCoreTiers | Select-Object -First 1)

                if ($currentDatabaseDetails.CurrentServiceObjectiveName -eq $minvCoreTier) {
                    Write-Output "vCore database is already at the lowest tier ($minvCoreTier)."
                }
                else {
                    for ($i = ($vCoreTiers.Length - 1); $i -gt 0; $i--) {
                        if ($vCoreTiers[$i] -eq $currentDatabaseDetails.CurrentServiceObjectiveName) {
                            $targetServiceObjectiveName = $vCoreTiers[$i - 1]

                            Write-Output "Scaling down database $databaseName to $targetServiceObjectiveName"
                            Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -RequestedServiceObjectiveName $targetServiceObjectiveName
                            break
                        }
                    }
                }
            }
            else {
                Write-Error "The database edition '$edition' is not supported."
            }

            # All done, closing the alert automatically
            $alert = [object] ($WebhookBody.data).essentials.alertId
            $pos = $alert.LastIndexOf("/")
            $alertId = $alert.Substring($pos + 1, 36)
            Write-Output "Closing alert $alertId"
            Update-AzAlertState -AlertId $alertId -State "Closed" -Comment "Required action was executed automatically by the scale-down runbook. No further action required, hence closing this alert."

        } else {
            Write-Error "The alert status - $status - is not in the expected state."
        }
    }
    else {
        Write-Error "The alert data schema '$schemaId' is not supported."
    }
}
else {
    Write-Error -Message "Webhook data - $WebhookData - is not in the expected format."
}</pre>



<h3 class="wp-block-heading">Step #3: Trigger the Automation runbook(s) via an Azure Monitor alert </h3>



<p>Create a new alert rule on your Azure SQL Database:&nbsp;</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="708" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-01-1296x708.png" alt="Azure SQL Database" class="wp-image-28299" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 30" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-01-1296x708.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-01-300x164.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-01-768x420.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-01-1536x840.png 1536w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-01-495x271.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-01.png 1568w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>The next step requires several different settings:&nbsp;</p>



<ol class="wp-block-list">
<li><strong>Scope of the alert</strong>: this will be auto-populated if the New Alert Rule is clicked from within the database itself.&nbsp;</li>



<li><strong>Condition</strong>: when the alert should be triggered &#8211; select a signal and define its logic.</li>



<li><strong>Actions</strong>: what should happen when the alert is triggered.&nbsp;</li>



<li><strong>Details</strong>: location, name, and other configuration values.</li>
</ol>



<p><strong>Condition</strong>&nbsp;</p>



<p>For this example, the alert evaluates the average CPU consumption over the last 5 minutes, checking every 1 minute. When the average goes over 85%, the alert is triggered: </p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="631" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-02-1296x631.png" alt="Azure SQL Database" class="wp-image-28301" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 31" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-02-1296x631.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-02-300x146.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-02-768x374.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-02-1536x748.png 1536w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-02-495x241.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-02.png 1668w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>
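<p>The same condition can also be scripted with Az PowerShell. The sketch below assumes an already-existing action group and uses placeholder resource IDs; the metric name for Azure SQL CPU is <code>cpu_percent</code>:</p>

```powershell
# Fire when average CPU > 85% over a 5-minute window, evaluated every minute
# (resource IDs and names are hypothetical placeholders).
$criteria = New-AzMetricAlertRuleV2Criteria `
    -MetricName "cpu_percent" `
    -TimeAggregation Average `
    -Operator GreaterThan `
    -Threshold 85

Add-AzMetricAlertRuleV2 `
    -Name "sqldb-cpu-high" `
    -ResourceGroupName "my-rg" `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-sql-server/databases/my-db" `
    -Condition $criteria `
    -WindowSize (New-TimeSpan -Minutes 5) `
    -Frequency (New-TimeSpan -Minutes 1) `
    -ActionGroupId "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/microsoft.insights/actionGroups/autoscale-ag" `
    -Severity 3
```

<p>A mirror rule with <code>-Operator LessThan -Threshold 40</code> covers the scale-down case.</p>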



<p><strong>Actions</strong>&nbsp;</p>



<p>After the signal logic is created, we need to define what the alert does when it fires. We do this with an action group. When creating a new <strong>action group</strong>, two tabs help you configure triggering the runbook. </p>



<p>Basics:&nbsp;</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1080" height="621" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-01.png" alt="Azure SQL Database" class="wp-image-28303" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 32" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-01.png 1080w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-01-300x173.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-01-768x442.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-01-495x285.png 495w" sizes="auto, (max-width: 1080px) 100vw, 1080px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>Actions: </p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1296" height="457" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-02-1296x457.png" alt="Azure SQL Database" class="wp-image-28305" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 33" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-02-1296x457.png 1296w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-02-300x106.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-02-768x271.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-02-1536x541.png 1536w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-02-495x174.png 495w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-action-group-02.png 1895w" sizes="auto, (max-width: 1296px) 100vw, 1296px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>After saving the action group, add the remaining details to the alert.</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="1083" height="632" src="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-03.png" alt="Azure SQL Database" class="wp-image-28307" title="Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB 34" srcset="https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-03.png 1083w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-03-300x175.png 300w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-03-768x448.png 768w, https://nearshore-it.eu/wp-content/uploads/2024/07/nearshore_2024.07.25_create-alert-03-495x289.png 495w" sizes="auto, (max-width: 1083px) 100vw, 1083px" /></figure></div>


<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<p>Create another alert and action for scaling down with the condition: <strong>CPU percentage is less than 40%</strong>. </p>



<p><strong>That&#8217;s it!</strong> The alerts are now enabled and will auto-scale the database when they fire. The runbook is executed twice per alert cycle: once when the alert fires and again when it resolves, but it only performs a scale operation on the fired notification.</p>
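<p>The fired-versus-resolved guard at the runbook's entry point can be sketched as follows. This is a hedged Python illustration, not the actual runbook (an Azure Automation runbook would typically be PowerShell); it assumes the action group posts the Azure Monitor common alert schema, where <code>monitorCondition</code> distinguishes the two invocations.</p>

```python
import json

def should_scale(webhook_body: str) -> bool:
    """Return True only for the 'Fired' invocation of the alert.

    Assumes the action group posts the Azure Monitor common alert
    schema, where data.essentials.monitorCondition is either
    "Fired" or "Resolved"."""
    payload = json.loads(webhook_body)
    return payload["data"]["essentials"]["monitorCondition"] == "Fired"

# The runbook runs twice per alert cycle; only the first call scales.
print(should_scale('{"data": {"essentials": {"monitorCondition": "Fired"}}}'))     # True
print(should_scale('{"data": {"essentials": {"monitorCondition": "Resolved"}}}'))  # False
```

<p>With this check in place, the resolved invocation becomes a harmless no-op rather than a second, unwanted scale operation.</p>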



<h2 class="wp-block-heading">Summary&nbsp;</h2>



<p>The serverless Azure SQL Database offers a potentially cost-effective solution for database management, balancing performance and scalability with features like automatic pause and resume, high availability, and robust security. Key benefits include cost savings of $50-$60 per month from automatic pausing and flexibility in resource usage. However, it may not be ideal for applications needing instant access or consistent high performance due to potential delays and slight latencies during scaling.</p>
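<p>The savings mechanism can be made concrete with a back-of-the-envelope model. The rates below are invented placeholders, not current Azure prices; the point is only that serverless compute is billed per vCore-second while the database is active, so auto-paused hours cost nothing for compute.</p>

```python
# Rough monthly cost model for serverless vs. a fixed provisioned tier.
# Both rates are illustrative placeholders, not actual Azure pricing.
VCORE_SECOND_RATE = 0.000145   # assumed $/vCore-second while running
PROVISIONED_MONTHLY = 380.0    # assumed $/month for a comparable fixed tier

def serverless_compute_cost(active_hours_per_day: float,
                            avg_vcores: float,
                            days: int = 30) -> float:
    """Compute is billed per vCore-second only while the database is
    active; auto-pause makes idle hours free on the compute side."""
    active_seconds = active_hours_per_day * 3600 * days
    return active_seconds * avg_vcores * VCORE_SECOND_RATE

# A database busy 8 h/day at ~2 vCores average:
cost = serverless_compute_cost(8, 2)
print(round(cost, 2))                         # 250.56
print(round(PROVISIONED_MONTHLY - cost, 2))   # saving vs. the fixed tier
```

<p>A workload that is idle most of the day amplifies the gap; a workload that is busy around the clock can erase it, which is why the article's caveat about consistent high-demand applications matters.</p>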
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/auto-scaling-azure-sql-database/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>VMware to Azure migration – the ultimate step-by-step guide. Migrate VMware VMs smoothly</title>
		<link>https://nearshore-it.eu/articles/migrating-vmware-to-azure/</link>
					<comments>https://nearshore-it.eu/articles/migrating-vmware-to-azure/#respond</comments>
		
		<dc:creator><![CDATA[-- Nie pokazuj autora --]]></dc:creator>
		<pubDate>Mon, 03 Jun 2024 09:43:11 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Technologies]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=27530</guid>

					<description><![CDATA[The seamless migration of VMware VMs to Microsoft Azure is at your fingertips. Follow our guide and migrate VMware with no issues! ]]></description>
										<content:encoded><![CDATA[
<p>VMware is a great tool for virtualization, managing virtual machines, and optimizing resources, and it has gained a considerable number of enthusiasts over the years. However, the licensing changes introduced in 2023 have made many of its users think more carefully about cost optimization. One option is to migrate VMware VMs to another solution, such as the Azure cloud. We assume that the selection stage is behind you: the decision has been made, and your company is switching to Azure. In this guide, we&#8217;ll walk you through the necessary steps and outline Microsoft learning resources.</p>



<h2 class="wp-block-heading">Interesting statistics behind &#8220;Why migrate from VMware to Azure&#8221;  </h2>



<ul class="wp-block-list">
<li>Nearly <strong>50 percent </strong>of IT professionals consider cost the most significant factor in looking for an alternative solution, according to a survey by <a href="https://www.gartner.com/en/documents/5330263" target="_blank" rel="noreferrer noopener">Gartner</a>.&nbsp;&nbsp;</li>



<li>In November 2023, VMware was acquired by Broadcom. The change in ownership came with <strong>new licensing models. </strong>The subscription model that replaces lifetime licenses will be beneficial under some circumstances, but may also increase costs in others.&nbsp;&nbsp;</li>
</ul>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#VMware-on-premises-–-when-is-it-better-to-stay?">1.  VMware on-premises – when is it better to stay?  </a></li>
                    <li><a href="#So-why-do-companies-migrate-VMS-to-Azure?">2.  So why do companies migrate VMs to Azure?  </a></li>
                    <li><a href="#how-to-prepare-for-the-migration-process-in-4-simple-steps">3.  Migrate VMware – how to prepare for the migration process in 4 simple steps </a></li>
                    <li><a href="#Assessing-your-current-VMware-environment">4.  Assessing your current VMware environment </a></li>
                    <li><a href="#Evaluating-Azure-subscription-options">5.  Evaluating Azure subscription options </a></li>
                    <li><a href="#Setting-up-your-Azure-environment">6.  Setting up your Azure environment </a></li>
                    <li><a href="#Creating-Azure-VM-instances">7.  Creating Azure VM instances </a></li>
                    <li><a href="#Azure-Migrate-project-–-executing-the-migration">8.  Azure Migrate project – executing the migration </a></li>
                    <li><a href="#Run-a-test-migration-to-ensure-data-integrity">9.  Run a test migration to ensure data integrity </a></li>
                    <li><a href="#VMware-migration-to-Azure-–-post-migration-optimization">10.  VMware migration to Azure – post-migration optimization </a></li>
                    <li><a href="#Summary-–-your-migration-to-the-Azure-VMWare-solution">11.  Summary – your migration to the Azure VMWare solution </a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="VMware-on-premises-–-when-is-it-better-to-stay?">VMware on-premises – when is it better to stay?</h2>



<p>The new subscription model may prove to be a better choice in some situations for certain organizations. For example, companies that don&#8217;t have technical support will get it bundled with the subscription. This model also represents a more flexible way to adjust the license cost to the real-world usage of resources.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="So-why-do-companies-migrate-VMS-to-Azure?">So why do companies migrate VMs to Azure?</h2>



<p>Despite the potential benefits, many organizations decide to migrate VMware to the Azure cloud &#8220;as is&#8221; to gain greater cost efficiency. This way, they do not need to incur the cost of on-premises infrastructure and gain more flexibility.&nbsp;&nbsp;</p>



<p>Azure VMware Solution (AVS) allows them to combine the potential of VMware capabilities with the potential of the cloud: business continuity and perfect scaling. Furthermore, no refactoring is needed, licenses are included, and consistency with on-premises workloads is assured.&nbsp;</p>



<h2 class="wp-block-heading" id="how-to-prepare-for-the-migration-process-in-4-simple-steps">Migrate VMware – how to prepare for the migration process in 4 simple steps</h2>



<ol class="wp-block-list" start="1">
<li><strong>Plan</strong> –<strong> </strong>before migrating from on-premises VMware to Azure, you need to carefully think over and plan the transition. You can handle it in-house or, in case any issues arise, take advantage of the opportunity for an advisory call with cloud experts.&nbsp;</li>
</ol>



<ol class="wp-block-list" start="2">
<li><strong>Collaborate </strong>– you have a transition process ahead of you. From now on, cooperation and good communication with all stakeholders involved will work in your favor. The use of project management tools for this purpose is standard today, especially in Agile projects.&nbsp;</li>
</ol>



<ol class="wp-block-list" start="3">
<li><strong>Assess </strong>– evaluate the existing environment to identify dependencies, applications, and resources that require migration. A proper automation tool like Azure Migrate will come in handy here. There are many such tools available on the market. In our experience, Microsoft solutions provide everything you need out of the box.&nbsp;</li>
</ol>



<ol class="wp-block-list" start="4">
<li><strong>Set up </strong>–<strong> </strong>connect to Azure and ensure all necessary permissions and access rights are in place. You need to set up accounts and credentials in the<strong> </strong>Azure portal to walk through migration smoothly.&nbsp;&nbsp;</li>
</ol>



<h2 class="wp-block-heading" id="Assessing-your-current-VMware-environment">Assessing your current VMware environment</h2>



<p>When starting your migration process to Azure, it is essential to evaluate your current on-premises workloads to determine their cloud readiness, identify any risks that may arise, and estimate the associated costs and level of complexity. Below is how you do it.&nbsp;</p>



<p>Essential steps before you start:&nbsp;</p>



<ul class="wp-block-list">
<li><a href="https://azure.microsoft.com/pricing/free-trial/" target="_blank" rel="noreferrer noopener">Set up a free Azure account</a> if you don&#8217;t have one yet.&nbsp;&nbsp;</li>



<li>Ensure you have<a href="https://learn.microsoft.com/en-us/azure/migrate/vmware/tutorial-discover-vmware" target="_blank" rel="noreferrer noopener"> discovered servers </a>to assess.&nbsp;&nbsp;</li>
</ul>



<h3 class="wp-block-heading">1) Choose the assessment method&nbsp;</h3>



<p>Assess based either on server configuration data/metadata or on the collection of dynamic performance data.&nbsp;</p>



<h3 class="wp-block-heading">2) Assess and Migrate using the Azure Migration Tool&nbsp;</h3>



<p>This can be done in the <strong>Discovery and Assessment tab, </strong>by<strong> </strong>selecting <strong>Assess</strong> and then going to <strong>Azure VM</strong>. You can use the <a href="https://learn.microsoft.com/en-us/azure/migrate/vmware/tutorial-assess-vmware-azure-vm" target="_blank" rel="noreferrer noopener">Azure tutorial </a>for a detailed overview of all the configurations needed at this step.&nbsp;&nbsp;</p>



<h3 class="wp-block-heading">3) Review your assessment&nbsp;&nbsp;</h3>



<p>As a result, you will obtain recommendations regarding Azure readiness. These include:&nbsp;</p>



<ul class="wp-block-list">
<li>Ready for Azure,&nbsp;&nbsp;</li>



<li>Ready with conditions,&nbsp;</li>



<li>Not ready for Azure,&nbsp;</li>



<li>Readiness unknown (in case of data availability problems).&nbsp;</li>
</ul>



<p>You will also get the estimated monthly costs and associated storage expenses. Remember, you can make use of Azure Cost Management tools when migrating workloads to Azure.&nbsp;</p>
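<p>To show the kind of summary you can build from that output, here is a small Python sketch that groups hypothetical assessment rows by readiness and totals the estimated monthly cost. Server names, categories, and figures are invented for illustration; a real Azure Migrate assessment export would be the actual data source.</p>

```python
from collections import defaultdict

# Hypothetical rows from an assessment export: (server, readiness, est. $/month).
assessment = [
    ("web-01", "Ready for Azure", 92.0),
    ("web-02", "Ready for Azure", 92.0),
    ("db-01", "Ready with conditions", 310.0),
    ("legacy-01", "Not ready for Azure", 0.0),
]

by_readiness = defaultdict(list)
for server, readiness, cost in assessment:
    by_readiness[readiness].append((server, cost))

for readiness, servers in sorted(by_readiness.items()):
    total = sum(cost for _, cost in servers)
    print(f"{readiness}: {len(servers)} server(s), est. ${total:.2f}/month")
```
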



<h2 class="wp-block-heading" id="Evaluating-Azure-subscription-options">Evaluating Azure subscription options</h2>



<p>Select the Azure subscription option that best fits your organization&#8217;s needs by evaluating cost, scalability, and the features you need.&nbsp;</p>



<p>Before deploying the Azure subscription, consider familiarizing yourself with <strong>the saving options Azure offers. </strong>The great thing is that the Azure assessment you performed, in addition to providing relevant information on your cloud readiness, storage requirements, and recommended cloud region, will also suggest the best cost optimization options (that is, <strong>Azure Reservations, Azure Saving Plans, or Azure Hybrid Benefits </strong>for saving on computing costs).&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Setting-up-your-Azure-environment">Setting up your Azure environment</h2>



<p>Setting up your Azure environment involves selecting the right tools and resources. This will allow you to streamline the process of moving your data and applications to the Azure environment, saving time and reducing potential errors. Once done, the next step is to set up an Azure environment for your organization. We recommend using the Azure Migrate native tool. &nbsp;<br>&nbsp;<br>The steps include:&nbsp;</p>



<h3 class="wp-block-heading">1) Deployment of the Azure Migration appliance&nbsp;</h3>



<p>You will need to create an appliance server and follow the configuration steps detailed in the <a href="https://learn.microsoft.com/en-us/azure/migrate/vmware/how-to-set-up-appliance-vmware" target="_blank" rel="noreferrer noopener">Azure tutorial</a>.&nbsp;</p>



<h3 class="wp-block-heading">2) Start continuous discovery&nbsp;</h3>



<p>To access the servers&#8217; configuration and performance data, your appliance must connect with the vCenter Server. Continuous discovery allows you to catalog software, analyze dependencies, and locate SQL Server instances and databases.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Creating-Azure-VM-instances">Creating Azure VM instances</h2>



<p>Creating Azure VM instances can be done by replicating Azure VMs to a secondary location or transferring virtual machines from VMware to Azure. The process involves specifying the Virtual Machine name and selecting the appropriate settings for the migration.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Azure-Migrate-project-–-executing-the-migration">Azure Migrate project – executing the migration</h2>



<p>Performing an Azure Migrate server assessment is necessary to understand the current environment and define the migration settings. Utilizing Azure Migration and Modernization tools can simplify and streamline the process. Choose the right migration method (agentless or agent-based, as described in the Microsoft resources), considering, for example:&nbsp;&nbsp;</p>



<ul class="wp-block-list">
<li>Permissions needed to run the migration,&nbsp;</li>



<li>Number of VM migrations to be performed at once,&nbsp;</li>



<li>Disk limits – selection of storage account/number of disks in Azure,&nbsp;</li>



<li>Passthrough disks to be used as a storage source,&nbsp;</li>



<li>Deployment steps that will depend on the method.&nbsp;</li>
</ul>
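<p>The considerations above amount to a checklist, which the toy helper below illustrates. The rules and the concurrency limit are placeholders showing the shape of the decision, not authoritative constraints; consult the Azure Migrate documentation for the actual capabilities of each method.</p>

```python
def pick_method(vcenter_managed: bool, concurrent_vms: int,
                uses_passthrough_disks: bool,
                agentless_vm_limit: int = 300) -> str:
    """Toy chooser between agentless and agent-based migration.

    All rules and the limit value here are illustrative placeholders;
    check the Azure Migrate docs for the real constraints."""
    if not vcenter_managed or uses_passthrough_disks:
        return "agent-based"
    if concurrent_vms > agentless_vm_limit:
        return "agent-based"
    return "agentless"

print(pick_method(vcenter_managed=True, concurrent_vms=50,
                  uses_passthrough_disks=False))   # agentless
```
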



<h2 class="wp-block-heading" id="Run-a-test-migration-to-ensure-data-integrity">Run a test migration to ensure data integrity</h2>



<p>Data integrity refers to the accuracy, completeness, and consistency of data. According to <a href="https://www.gartner.com/en" target="_blank" rel="noreferrer noopener">a Gartner</a> study, due to <a href="https://nearshore-it.eu/articles/what-is-data-quality/" target="_blank" rel="noreferrer noopener">data quality</a> problems, <strong>80% of data migrations won&#8217;t meet their business objectives, </strong>and the annual cost of bad data runs into the <strong>millions of dollars. </strong>That&#8217;s why migration tests are conducted when migrating databases. VMware&#8217;s move to the cloud is no different.&nbsp;&nbsp;</p>



<p>Run a test migration to ensure data integrity before conducting a full migration process. This involves moving a subset of data to a test environment to check for any errors or missing information. Then, a clean-up test migration should be performed to remove any redundant data. Otherwise, you may incur extra expenses in Azure. The data integrity check is carried out in two steps:&nbsp;</p>



<h3 class="wp-block-heading">1) Validating replication&nbsp;</h3>



<p>Checking if every sector changed in the source disk was replicated to the target one.&nbsp;&nbsp;</p>



<h3 class="wp-block-heading">2) Ensuring data consistency&nbsp;&nbsp;</h3>



<p>Ensuring that the data sent to the Azure disks matches the data copied from the source disks.&nbsp;&nbsp;</p>
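<p>The idea behind such a consistency check can be sketched with per-chunk checksums: hash fixed-size blocks of the source and target and compare the digests. This is an illustration of the principle only, not Azure Migrate's actual validation mechanism.</p>

```python
import hashlib

def chunk_digests(data: bytes, chunk_size: int = 4096) -> list:
    """SHA-256 digest per fixed-size chunk, mimicking a per-sector check."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def consistent(source: bytes, target: bytes) -> bool:
    """True when every chunk of the target matches the source."""
    return chunk_digests(source) == chunk_digests(target)

source = b"A" * 8192
print(consistent(source, source))              # True
print(consistent(source, b"A" * 8191 + b"B"))  # False
```
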



<p>Also read: <a href="https://nearshore-it.eu/articles/azure-cost-optimization-best-practices/" target="_blank" rel="noreferrer noopener">Azure cost optimization</a>&nbsp;</p>



<h2 class="wp-block-heading" id="VMware-migration-to-Azure-–-post-migration-optimization">VMware migration to Azure – post-migration optimization</h2>



<p>Your journey doesn&#8217;t end with the cloud migration itself. Post-migration optimization is crucial<strong>. </strong>Ensuring proper Backup and Recovery solutions and using Managed Services will be the steps you take to ensure business continuity.&nbsp;&nbsp;</p>



<h3 class="wp-block-heading">Implementing Azure Backup and Disaster Recovery solutions&nbsp;</h3>



<p>Choose a backup solution for the VMware Virtual Machines, such as <a href="https://docs.microsoft.com/azure/backup/backup-azure-backup-server-vmware?context=/azure/azure-vmware/context/context" target="_blank" rel="noreferrer noopener">Microsoft Azure Backup Server (MABS).&nbsp;</a> For optimal results, consider using Azure Managed Services. Such services are great whenever you need to enhance efficiency.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading" id="Summary-–-your-migration-to-the-Azure-VMWare-solution">Summary – your migration to the Azure VMWare solution</h2>



<p>In a recent webinar, an Inetum expert emphasized why it is worth considering moving on-premises VMware vSphere to Azure. Built-in tools offered by Microsoft allow for accurate estimation of the costs involved in the migration process. Making use of the Azure cost calculations provided by the platform is also important. Additional features such as Azure security solutions and the general scalability of the Azure cloud give many companies great options for keeping their existing virtualization features in a modern environment.</p>



<div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2024/06/nearshore_2024.05.27_cover.jpg" alt="nearshore 2024.05.27 cover" title="VMware to Azure migration – the ultimate step-by-step guide. Migrate VMware VMs smoothly 35"></div><div class="tile-content"><p class="entry-title client-name">Migrate on-premises VMware VMs to Azure with us!</p>
Do you find transferring your VMware to Azure using an Azure migration tool a challenge?  
Contact us to make your transition seamless! <br /><br />
<a class="btn btn-primary" href="https://outlook.office365.com/book/BookameetingwithMarek1@gfi.fr/" target="_blank" rel="noopener">Contact us!</a></div></div></div></div>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/migrating-vmware-to-azure/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Elevate Your Cloud&#8217;s Efficiency with Azure Cost Optimization best practice: Zero-Cost Techniques for Peak Performance</title>
		<link>https://nearshore-it.eu/articles/azure-cost-optimization-best-practices/</link>
					<comments>https://nearshore-it.eu/articles/azure-cost-optimization-best-practices/#respond</comments>
		
		<dc:creator><![CDATA[-- Nie pokazuj autora --]]></dc:creator>
		<pubDate>Wed, 06 Mar 2024 05:27:00 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Best practices]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=27032</guid>

					<description><![CDATA[Read the article and learn advanced techniques and best practices for Azure cost optimization, providing you with the knowledge and tools you need to optimize cloud costs and achieve maximum efficiency.]]></description>
										<content:encoded><![CDATA[
<p>Cloud computing has revolutionized the way businesses operate, providing them with the flexibility and scalability they need to grow and succeed. However, with the benefits of cloud computing come the challenges of managing costs. Explore how to elevate your cloud&#8217;s efficiency with Azure cost optimization, focusing on zero-cost techniques that can be implemented for peak performance.</p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#Azure-Cost-Management">1.  Understanding Azure Cost Management</a></li>
                    <li><a href="#Zero-Cost-Azure-Cost-Optimization-Best-Practice">2.  Zero-Cost Azure Cost Optimization Best Practice</a></li>
                    <li><a href="#Azure-Cloud-Optimization-trends">3.  Take Advantage of Azure Cloud Optimization Trend</a></li>
                    <li><a href="#The-Development-of-Azure-policy">4.  Future Perspective: The Development of Azure Policy</a></li>
                    <li><a href="#Tips-and-Best-Practices-to-Optimize-your-Azure">5.  To sum up&#8230;</a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="Azure-Cost-Management">Understanding Azure Cost Management</h2>



<p>Cost optimization on the Azure platform involves using tools and strategies that allow you to minimize the costs associated with using the cloud.&nbsp;</p>



<p>The importance of cost optimization in Azure is a key element in the effective and efficient use of cloud computing. Cost optimization in Azure is not just about minimizing expenses; it has a significant impact on overall operational efficiency.&nbsp;</p>



<p>One of the most important aspects of cost optimization in Azure is monitoring resource utilization. This allows you to identify unnecessary or unused resources and turn them off, saving costs. In addition, cost optimization on the Azure platform also includes the use of process automation tools, which allows you to reduce the costs associated with infrastructure management.&nbsp;</p>



<p><strong>Read also:&nbsp;</strong><a href="https://nearshore-it.eu/articles/technologies/azure-cost-management-101-how-to-optimize-cloud-costs/"> Azure Cost Management 101. How to optimize cloud costs?</a></p>



<h2 class="wp-block-heading" id="Zero-Cost-Azure-Cost-Optimization-Best-Practice">Zero-Cost Azure Cost Optimization Best Practice&nbsp;</h2>



<p>Cost optimization is becoming an integral part of the cloud strategy, enabling organizations to achieve the highest performance with minimal financial outlay. To achieve this, it is crucial to focus on practices that do not generate additional costs.&nbsp;</p>



<h3 class="wp-block-heading">Use of native Azure tools&nbsp;</h3>



<p>Azure provides a comprehensive suite of tools specifically crafted to streamline the monitoring and optimization of costs within the cloud environment. One notable tool is <strong><a href="https://nearshore-it.eu/articles/technologies/10-azure-cost-management-tools-to-optimize-your-budget/">Azure Cost Management</a> and Billing</strong>, a centralized platform that empowers users to meticulously track their expenditures. Through this tool, organizations can gain insights into resource consumption patterns, identify cost drivers, and make informed decisions on where cost optimizations can be implemented.&nbsp;</p>



<p>Furthermore, <strong>Azure Advisor</strong> plays a crucial role by offering personalized recommendations for resource management. This intelligent advisory service leverages Machine Learning algorithms to analyze usage patterns and suggests optimizations tailored to the specific needs of the organization. These recommendations span diverse aspects, ranging from right-sizing virtual machines to identifying underutilized resources, providing actionable insights for efficient cost management.&nbsp;</p>



<h3 class="wp-block-heading">Virtual machine management&nbsp;</h3>



<p>The dynamic scaling of resources based on real-time demand is a cornerstone of efficient cost optimization. Leveraging the <strong>auto-scaling feature</strong> within Azure allows organizations to seamlessly adjust the number of virtual machines in response to fluctuating workloads. During periods of increased demand, the system can automatically provision additional resources, ensuring optimal performance without incurring unnecessary costs during periods of reduced activity.&nbsp;</p>



<p>The auto-scaling mechanism can be finely tuned to respond to predefined metrics such as CPU utilization or network traffic, providing a flexible and responsive infrastructure. This not only enhances the overall efficiency of resource utilization but also aligns with the principles of pay-as-you-go, where costs are incurred only when resources are actively needed.&nbsp;</p>
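<p>A threshold rule of the kind Azure autoscale evaluates can be sketched as below. The 70%/40% thresholds and instance bounds are illustrative defaults, not Azure's, and real autoscale also applies cooldown windows that this sketch omits.</p>

```python
def autoscale_decision(cpu_percent: float, current_vms: int,
                       scale_out_at: float = 70.0,
                       scale_in_at: float = 40.0,
                       min_vms: int = 1, max_vms: int = 10) -> int:
    """Return the new instance count for one evaluation of a
    CPU-based threshold rule (illustrative thresholds)."""
    if cpu_percent > scale_out_at and current_vms < max_vms:
        return current_vms + 1
    if cpu_percent < scale_in_at and current_vms > min_vms:
        return current_vms - 1
    return current_vms

print(autoscale_decision(85.0, 3))  # 4 -- scale out under load
print(autoscale_decision(20.0, 3))  # 2 -- scale in when quiet
print(autoscale_decision(55.0, 3))  # 3 -- hold steady in between
```
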



<h3 class="wp-block-heading">Data warehouse optimization&nbsp;</h3>



<p>Strategic management of data warehousing significantly influences overall costs in the cloud. Adopting practices such as <strong>data partitioning and compression</strong> becomes instrumental in curbing expenses related to storage. Data partitioning involves dividing large datasets into smaller, more manageable segments, allowing for more efficient storage and retrieval. Compression, on the other hand, reduces the physical storage space required for data, leading to direct cost savings in terms of cloud storage.&nbsp;</p>



<p>Azure provides tools and features that enable organizations to implement these optimization strategies seamlessly. By incorporating these practices, businesses not only economize on storage costs but also enhance the performance of data-intensive workloads, contributing to an overall more cost-effective cloud deployment.&nbsp;</p>
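<p>The storage effect of compression is easy to demonstrate on repetitive, column-like data, which is exactly the shape of much warehouse content. The sample data and the resulting ratio below are illustrative only; real savings depend on the data and the codec the storage service uses.</p>

```python
import zlib

# Repetitive, column-like rows compress very well; illustrative data.
rows = b"".join(b"2024-03-06,region-eu,200,OK\n" for _ in range(1000))
compressed = zlib.compress(rows, level=6)

print(f"raw: {len(rows)} bytes, compressed: {len(compressed)} bytes")
print(f"storage after compression: {len(compressed) / len(rows):.1%} of original")
```
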



<h3 class="wp-block-heading">Network traffic management&nbsp;</h3>



<p>Effective control of data transfer costs is paramount in the pursuit of cloud cost optimization. <strong>Conscious management of network traffic</strong> emerges as a crucial strategy in achieving this objective. Organizations can strategically minimize data transfer between Azure regions by strategically placing resources or utilizing region-specific services. Additionally, the adoption of <strong>Content Delivery Network (CDN) services</strong> aids in optimizing the delivery of content by strategically caching data at edge locations, reducing latency, and mitigating unnecessary data transfer costs.&nbsp;</p>



<p>By implementing robust network traffic management practices, organizations can mitigate the impact of data transfer costs on their overall cloud expenditure. This involves a strategic balance between optimizing the performance of applications and minimizing unnecessary data movement, ultimately contributing to a more efficient and cost-effective cloud infrastructure.</p>



<h2 class="wp-block-heading" id="Azure-Cloud-Optimization-trends">Take Advantage of Azure Cloud Optimization trends&nbsp;</h2>



<p>Trends in Azure cost optimization are increasingly focused on both strategic resource management and the utilization of specific Azure services. Organizations can significantly reduce their Azure spend without compromising performance by:&nbsp;</p>



<ul class="wp-block-list">
<li><strong>Strategic Resource Management</strong>: This trend involves the careful allocation and management of resources to ensure they are used efficiently. By strategically managing resources, organizations can avoid unnecessary Azure costs.&nbsp;</li>



<li><strong>Utilization of Specific Azure Services</strong>: Services like Azure Hybrid Benefit and Azure Reservations are being increasingly used for cost optimization. Azure Hybrid Benefit allows organizations to use their on-premises Windows Server and SQL Server licenses in Azure, leading to significant cost savings. Azure Reservations, on the other hand, provide discounted prices for one- or three-year commitments on various Azure services.&nbsp;</li>



<li><strong>Regular Cost Analysis</strong>: Regularly analyzing Azure costs can help identify areas where spending can be reduced without compromising performance. Tools like Azure Cost Management are being used to monitor, allocate, and optimize cloud costs. These tools provide insights into where organizations are accruing costs and offer suggestions for reducing unnecessary spending.&nbsp;</li>



<li><strong>Use of Azure Spot VMs and Reserved Instances</strong>: Azure Spot VMs offer unused Azure capacity at a significant discount, while reserved instances allow organizations to commit to Azure resources over a one- or three-year period in exchange for a discount. Both of these strategies help optimize the Azure budget.&nbsp;</li>



<li><strong>Leveraging Azure Cost Optimization Strategies</strong>: Using server licenses with Azure and Azure Kubernetes Service can help organizations find the most cost-effective approach for their specific workloads.&nbsp;</li>



<li><strong>Trend of Containerization</strong>: The use of container technologies like Docker and Kubernetes allows for efficient management and deployment of applications in the cloud. This not only improves performance but also contributes to cost reduction.&nbsp;</li>



<li><strong>Security as a Key Element of Cost Optimization</strong>: Preventing security breaches is more cost-effective than addressing them after they occur. Azure offers a range of tools and services to protect data and applications in the cloud, which can significantly aid in cost optimization efforts.&nbsp;</li>
</ul>
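<p>The reservation and Spot trade-offs above boil down to simple discount arithmetic, sketched below. Both the hourly rate and the discount percentages are assumed placeholders; actual discounts vary by service, region, and term, and Spot VMs can be evicted at any time.</p>

```python
# Illustrative discount math; all rates are invented placeholders.
PAYG_HOURLY = 0.50             # assumed pay-as-you-go $/hour for a VM size
RESERVED_1YR_DISCOUNT = 0.40   # assumed ~40% off for a 1-year reservation
SPOT_DISCOUNT = 0.80           # assumed ~80% off for interruptible Spot VMs

hours_per_month = 730
payg = PAYG_HOURLY * hours_per_month
reserved = payg * (1 - RESERVED_1YR_DISCOUNT)
spot = payg * (1 - SPOT_DISCOUNT)

print(f"pay-as-you-go: ${payg:.2f}/month")
print(f"1-year reservation: ${reserved:.2f}/month")
print(f"spot (interruptible): ${spot:.2f}/month")
```

<p>The choice between them is about workload shape: reservations suit steady baseline load, while Spot capacity suits fault-tolerant batch work that can absorb evictions.</p>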



<h2 class="wp-block-heading" id="The-Development-of-Azure-policy">Future Perspective: The Development of Azure policy&nbsp;</h2>



<p>The future of cost optimization in the cloud is promising and is expected to develop significantly in the coming years. This is because as many as <a href="https://www.pluralsight.com/resource-center/state-of-cloud-2023" target="_blank" rel="noreferrer noopener">94% of organizations are already in the cloud, but 69% of them don&#8217;t have a defined cloud strategy</a>. It includes the development of advanced tools and strategies to help companies manage and reduce cloud expenses. One such tool is Azure Policy, which helps organizations enforce and maintain compliance with corporate standards and service level agreements.&nbsp;&nbsp;</p>



<p>These policies can play a key role in controlling and minimizing cloud costs by setting guidelines and automatically implementing cost-saving measures. Additionally, the development of more sophisticated automation tools will further improve the process of resource allocation and cost control. Technologies such as Machine Learning and Artificial Intelligence solutions can provide companies with insights and recommendations for optimizing cloud spend.&nbsp;</p>



<figure class="wp-block-table"><table><tbody><tr><td><br><div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2021/11/nearshore_2021.11.25_cover.jpg" alt="nearshore 2021.11.25 cover" title="Elevate Your Cloud&#039;s Efficiency with Azure Cost Optimization best practice: Zero-Cost Techniques for Peak Performance 36"></div><div class="tile-content"><p class="entry-title client-name">CLOUD ENGINEERING</p>

<h3>Make the best use of the cloud!</h3>
Enter the digital world taking advantage of our cloud competences!
<a class="btn btn-primary" href="https://nearshore-it.eu/cloud-engineering/" target="_blank" rel="noopener">Get started now!</a>



</div></div></div></div></td></tr></tbody></table></figure>



<h2 class="wp-block-heading" id="Tips-and-Best-Practices-to-Optimize-your-Azure">Tips and Best Practices to Optimize Your Azure Costs &#8211; Summary&nbsp;</h2>



<p>Utilizing cost optimization techniques, such as resource management, use of monitoring and analysis tools, as well as understanding and optimizing pricing models, can significantly enhance cloud performance while simultaneously reducing costs.&nbsp;</p>



<p>Inetum, as an experienced specialist in the implementation and cost optimization of cloud solutions, can assist in fully exploiting the potential of Microsoft Azure. Thanks to our experience and knowledge, we are able to provide tailored solutions that will help achieve optimal cloud performance at minimal costs.&nbsp;</p>



<p>Remember, cost optimization is a continuous process that requires regular evaluation and adjustment. Therefore, using the services of experts such as Inetum can bring significant benefits to your organization.&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/azure-cost-optimization-best-practices/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Level Up Your Cost Management with Microsoft Azure Savings Plans for Compute!</title>
		<link>https://nearshore-it.eu/articles/cost-management-with-microsoft-azure-savings-plans/</link>
					<comments>https://nearshore-it.eu/articles/cost-management-with-microsoft-azure-savings-plans/#respond</comments>
		
		<dc:creator><![CDATA[-- Do not show the author --]]></dc:creator>
		<pubDate>Thu, 22 Feb 2024 11:51:02 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[Best practices]]></category>
		<category><![CDATA[Cloud engineering]]></category>
		<guid isPermaLink="false">https://nearshore-it.eu/?p=26958</guid>

					<description><![CDATA[Get to know smart cloud Azure savings plans and optimize your cloud spending in 2024!]]></description>
										<content:encoded><![CDATA[
<p>Savings &#8211; every infrastructure manager pursues them, yet many still fail to avoid budget slip-ups. Cloud is no different. <strong>In 2024, public cloud spending is forecasted to grow by 20% annually, reaching nearly $680 billion.</strong> Proper allocation of cloud resources and infrastructure scaling will be especially important for companies implementing Generative AI solutions, Gartner analysts warn.</p>



<p>&#8220;<em>The success of implementing GenAI solutions is influenced by several factors, including the competencies of the implementation team, the tools and licensing forms used, appropriate business processes, and cost-effectiveness. The costs of such an investment can be optimized, for instance, by using properly selected algorithms that require less computing power and, consequently, generate fewer costs.</em> </p>



<p><em>Conducting an AI Maturity Assessment is beneficial to achieve optimal conditions in these areas. As part of this assessment, an experienced team will evaluate the organization&#8217;s readiness to implement GenAI solutions and provide recommendations for future work.</em>&#8221; –&nbsp;adds Marek Czachorowski, Practice Leader of Cloud Engineering at Inetum.</p>



<p>So how do you save money on the cloud and scale it right? One way is to take advantage of solutions from public cloud providers. In this guide, we examine the Azure Savings Plans offer from Microsoft, the leader in public cloud services. Find out what these plans are, how they differ from Azure Reserved Instances, and how to purchase and use them most effectively.</p>



<div class="table-of-contents">
    <p class="title"></p>
    <ol>
                    <li><a href="#How-to-Buy-and-Manage-Azure-Savings-Plans?">1.  How to Buy and Manage Azure Savings Plans?</a></li>
                    <li><a href="#Azure-Savings-Plans-Benefits-Explained">2.  Azure Savings Plans Benefits Explained</a></li>
                    <li><a href="#Getting-the-most-out-of-Azure-Saving-Plans">3.  Getting the most out of Azure Savings Plans</a></li>
                    <li><a href="#Utilizing-Azure-Hybrid-Benefit-with-Savings-Plans">4.  Utilizing Azure Hybrid Benefit with Savings Plans</a></li>
                    <li><a href="#Purchasing-Azure-Savings-plan---summary">5.  Purchasing Azure Savings plan &#8211; summary</a></li>
            </ol>
</div>


<h2 class="wp-block-heading" id="How-to-Buy-and-Manage-Azure-Savings-Plans?">How to Buy and Manage Azure Savings Plans?</h2>



<p>According to respondents to <a href="https://info.flexera.com/CM-REPORT-State-of-the-Cloud-2023-Thanks?revisit" target="_blank" rel="noopener">Flexera&#8217;s &#8220;State of the Cloud 2023&#8221; report</a>, cost management is now even more important than cyber security. Over 80% of respondents surveyed say that managing the cost of the cloud is their number 1 challenge. In times of economic concerns, companies are seeking support from FinOps practices or building the Cloud Centers of Excellence.&nbsp;</p>



<p>It&#8217;s worth knowing that Azure also offers savings plans for compute capacity. Let&#8217;s look at what public cloud leader Microsoft offers in the Azure Portal. Below we explain how to gain access to the savings plans.</p>



<h3 class="wp-block-heading">What is the Azure Savings Plan for compute?</h3>



<p>The idea behind the savings plan for compute is simple: you commit to spending a fixed hourly amount on compute services for <strong>one or three years</strong>, and in doing so you can save up to <strong>65%</strong> compared to pay-as-you-go prices.</p>
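<p>To see how the hourly commitment works in practice, here is a simplified billing model: usage is applied against the commitment at the discounted rate, the commitment itself is use-it-or-lose-it, and anything beyond it is billed at pay-as-you-go prices. The dollar figures and the flat 65% discount are illustrative assumptions; real billing applies per-meter discount rates.</p>

```python
def hourly_bill(payg_value: float, commitment: float, discount: float) -> float:
    """Simplified savings-plan billing for one hour.

    payg_value: what the hour's eligible compute would cost at pay-as-you-go.
    commitment: the fixed hourly amount committed to (use-it-or-lose-it).
    discount:   the plan's discount off pay-as-you-go (illustrative flat rate;
                real billing applies per-meter rates).
    """
    covered = commitment / (1 - discount)     # PAYG value the commitment absorbs
    overage = max(0.0, payg_value - covered)  # the rest is billed at PAYG
    return commitment + overage

# A $10/h commitment at a 65% discount absorbs ~$28.57 of PAYG value.
print(hourly_bill(20.0, 10.0, 0.65))            # 10.0  -> $20 of usage costs only the commitment
print(round(hourly_bill(40.0, 10.0, 0.65), 2))  # 21.43 -> overage billed at PAYG
```

<p>Note the flip side of the model: in an hour where you use less than the commitment, you still pay the full committed amount, which is why right-sizing the commitment matters.</p>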



<h3 class="wp-block-heading">How do you buy a savings plan?</h3>



<ol class="wp-block-list">
<li>First, ensure you meet the conditions for purchasing a savings plan &#8211; check your&nbsp;<a href="https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/view-all-accounts#check-the-type-of-your-account" target="_blank" rel="noopener">billing plan</a>.</li>



<li>Note that savings plan discounts are only applicable to resources connected to subscriptions acquired through an&nbsp;<strong>Enterprise Agreement (EA), Microsoft Customer Agreement (MCA), or Microsoft Partner Agreement (MPA).</strong></li>



<li>Once done, proceed to the Azure Portal, navigate to <strong>All services</strong>, and then choose<strong> Savings plans. </strong>From there, click <strong>Add</strong> to buy a savings plan.</li>
</ol>



<h2 class="wp-block-heading" id="Azure-Savings-Plans-Benefits-Explained">Azure Savings Plans Benefits Explained</h2>



<h3 class="wp-block-heading">Understanding Savings Plan Benefits for Compute Usage</h3>



<ul class="wp-block-list">
<li><strong>Working on your terms</strong> &#8211; you decide whether to commit for one year or three years, and determine the hourly plan based on recommendations for past compute usage.&nbsp;&nbsp;</li>



<li><strong>Optimizing consumption of selected services&nbsp; </strong>&#8211; as part of savings plans, lower prices are available for selected services, including Azure Virtual Machines, Azure App Service, container instances, Azure Functions, and more.</li>



<li><strong>Saving even more</strong> &#8211; the longer the commitment time, the greater the benefits and savings. You can combine savings offers, such as with Azure Hybrid Benefit explained below.&nbsp;</li>



<li><strong>Gaining flexibility </strong>&#8211; even though you cannot modify or cancel the commitment, if your usage exceeds the estimated figures, you can add another savings plan to cover the extra compute usage.</li>
</ul>



<h3 class="wp-block-heading">Key differences: Azure Reservation vs. Savings Plan</h3>



<p>A savings plan is not the only way to reduce expenditure. Azure gives you various options &#8211; another is the Azure Reservation service. How do they differ?</p>



<ul class="wp-block-list">



<li><strong>Azure Reservation </strong>&#8211; a commitment to a specific virtual machine type in a given cloud region. Choose reserved instances for stable workloads, or if you do not expect any changes, e.g. to the cloud region. Azure reserved instances provide cost savings compared to pay-as-you-go pricing and ensure the availability of resources for your specific workload. It&#8217;s important to carefully consider your workload requirements and usage patterns before reserving instances.</li>



<li><strong>Azure Savings Plan </strong>&#8211; it is a commitment to spending a fixed hourly amount in total on compute services. This option is recommended for dynamic workloads. Azure savings plan offers a pricing model that provides significant cost savings on consistent compute usage in Azure regions. By purchasing an Azure Savings plan, you can achieve savings across multiple Azure regions and select compute services covered by the plan.</li>
</ul>
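<p>The trade-off between the two options can be sketched with a simple cost-per-useful-hour model: a reservation's deeper discount only pays off while the reserved VM type stays in use, whereas a savings plan's commitment keeps floating to whatever eligible compute you run. The discount levels and utilization figures below are illustrative assumptions, not Azure price-list values.</p>

```python
def cost_per_useful_hour(payg_hourly: float, discount: float,
                         utilization: float, hours: int = 730) -> float:
    """Committed monthly spend divided by the hours that did useful work.

    Simplification: both offers bill the committed amount for every hour;
    `utilization` is the share of those hours the commitment actually
    covers work you still need.
    """
    committed = payg_hourly * (1 - discount) * hours
    useful_hours = max(1.0, utilization * hours)
    return committed / useful_hours

# Assumed, illustrative discounts: a deeper reservation discount (72%) loses
# to a shallower savings-plan discount (65%) once the reserved VM type is only
# needed 60% of the time, while the savings plan keeps floating to other compute.
reservation = cost_per_useful_hour(1.0, 0.72, utilization=0.6)
savings_plan = cost_per_useful_hour(1.0, 0.65, utilization=1.0)
print(round(reservation, 3), round(savings_plan, 3))  # 0.467 0.35
```

<p>The same model shows the reverse for a truly stable workload: at full utilization on both sides, the deeper reservation discount wins.</p>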



<h2 class="wp-block-heading" id="Getting-the-most-out-of-Azure-Saving-Plans">Getting the most out of Azure Savings Plans</h2>



<p>It&#8217;s a good idea to carry out thorough research before committing to a specific plan. Below we describe two popular aids: Azure Advisor and Azure Hybrid Benefit.</p>



<h3 class="wp-block-heading">Utilizing Azure Advisor to Leverage Savings Plan Discounts</h3>



<p>Azure Advisor is available for free in the Azure Portal. Using Azure Advisor can be compared to having your own cloud assistant: with Machine Learning capabilities at its service, it helps you recognize underutilized resources and ensure optimal usage of virtual machines and their scale sets. The Advisor then recommends <strong>shutting down</strong>&nbsp;or&nbsp;<strong>resizing</strong>&nbsp;the given resources.<br><br>Azure Advisor is also helpful in estimating which plan is appropriate for your company. Based on hourly and total on-demand usage costs, it produces calculations that help you choose the right option.<br><br>Get inspired to save money with minimal effort and read about this powerful tool in our guide: <a href="https://nearshore-it.eu/articles/azure-advisor/">Azure Advisor – the Microsoft Cost Management Tool.</a></p>
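<p>Advisor's recommendations can also be exported and processed programmatically. The sketch below filters the cost recommendations out of a JSON export; the sample payload and field names are assumptions modeled on the output of <code>az advisor recommendation list</code>, so check them against your own export before relying on them.</p>

```python
import json

# Hypothetical excerpt of an Advisor export; the field names are assumptions
# modeled on `az advisor recommendation list -o json` output.
raw = """[
  {"category": "Cost", "impact": "High", "impactedValue": "vm-batch-01",
   "shortDescription": {"solution": "Right-size or shut down underutilized virtual machine"}},
  {"category": "Security", "impact": "High", "impactedValue": "stg-logs",
   "shortDescription": {"solution": "Enable soft delete"}},
  {"category": "Cost", "impact": "Medium", "impactedValue": "vm-web-02",
   "shortDescription": {"solution": "Buy a reserved instance"}}
]"""

IMPACT_ORDER = {"High": 0, "Medium": 1, "Low": 2}

def cost_recommendations(payload: str) -> list[tuple[str, str, str]]:
    """Return (impact, resource, suggested action) for Cost recommendations,
    highest impact first."""
    recs = [r for r in json.loads(payload) if r["category"] == "Cost"]
    recs.sort(key=lambda r: IMPACT_ORDER.get(r["impact"], 9))
    return [(r["impact"], r["impactedValue"], r["shortDescription"]["solution"])
            for r in recs]

for impact, resource, action in cost_recommendations(raw):
    print(f"[{impact}] {resource}: {action}")
```
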



<h2 class="wp-block-heading" id="Utilizing-Azure-Hybrid-Benefit-with-Savings-Plans">Utilizing Azure Hybrid Benefit with Savings Plans</h2>



<p>Azure Hybrid Benefit is&nbsp;<strong>a special licensing offer for Azure cloud migrations.</strong><br>In simple terms, Azure Hybrid Benefit is a program that allows Azure customers to use their existing on-premises licenses for certain Microsoft software products. This way, they can reduce the cost of running them in the Azure cloud.</p>



<p>For instance, if someone already has licenses for certain Microsoft software (Windows Server or SQL Server) and wants to move their workloads to the cloud, Azure Hybrid Benefit lets them apply existing licenses and run the same product within Azure.</p>



<p>By utilizing Azure Hybrid Benefit with new Savings Plans, you can potentially save more money on Azure compute services.</p>
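<p>The stacking works because the Windows Server license is billed as an uplift on top of the compute meter: Azure Hybrid Benefit waives the uplift, while a savings plan discounts the compute meter. The hourly rates below are made-up illustration values, not Azure prices.</p>

```python
def windows_vm_hourly(base_compute: float, windows_uplift: float,
                      use_ahb: bool, sp_discount: float) -> float:
    """Illustrative hourly cost of a Windows VM.

    The savings plan discount applies to the compute meter; Azure Hybrid
    Benefit waives the separate Windows Server license uplift, so the two
    benefits stack. All rates here are made-up illustration values.
    """
    compute = base_compute * (1 - sp_discount)          # savings plan discounts compute
    license_cost = 0.0 if use_ahb else windows_uplift   # AHB waives the Windows uplift
    return compute + license_cost

payg = windows_vm_hourly(0.20, 0.09, use_ahb=False, sp_discount=0.0)     # ~0.29
stacked = windows_vm_hourly(0.20, 0.09, use_ahb=True, sp_discount=0.65)  # ~0.07
print(f"pay-as-you-go: ${payg:.2f}/h, AHB + savings plan: ${stacked:.2f}/h")
```

<p>In this illustration the two benefits together cut the hourly bill by roughly three quarters, which is why it pays to check license eligibility before buying a plan.</p>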



<h2 class="wp-block-heading" id="Purchasing-Azure-Savings-plan---summary">Purchasing an Azure Savings Plan &#8211; summary&nbsp;</h2>



<p>A savings plan allows you to lower cloud costs and optimize selected Azure resources at the same time. These plans cannot be canceled or changed during the contract, so the choice must be well thought out, depending on the workloads, services, and cloud regions used. You also need to take any future changes you are planning into account. While there are plenty of knowledge sources on the Microsoft Learn portal, it&#8217;s also worth hearing the opinions of cloud migration and cost optimization experts. You don&#8217;t have to make this decision alone!</p>



<p>Read also:&nbsp;<a href="https://nearshore-it.eu/articles/technologies/azure-cost-management-101-how-to-optimize-cloud-costs/">Azure Cost Management 101. How to optimize cloud costs?</a></p>



<p>Contact our subject matter experts if you want to quickly choose the right plan and start saving money right away.&nbsp;</p>



<div class="promotion-box promotion-box--image-left "><div class="tiles latest-news-once"><div class="tile"><div class="tile-image"><img decoding="async" src="https://nearshore-it.eu/wp-content/uploads/2021/11/nearshore_2021.11.25_cover.jpg" alt="nearshore 2021.11.25 cover" title="Level Up Your Cost Management with Microsoft Azure Savings Plans for Compute! 37"></div><div class="tile-content"><p class="entry-title client-name">CLOUD ENGINEERING</p>

<h3>Make the best use of the cloud!</h3>
Enter the digital world taking advantage of our cloud competences!
<a class="btn btn-primary" href="https://nearshore-it.eu/cloud-engineering/" target="_blank" rel="noopener">Get started now!</a>



</div></div></div></div>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://nearshore-it.eu/articles/cost-management-with-microsoft-azure-savings-plans/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
