<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Tech Journal 📚]]></title><description><![CDATA[Founder @CareerPod | SQE @Redhat | Python Developer | Cloud & DevOps Enthusiast | AI / ML Advocate | Tech Enthusiast]]></description><link>https://blog.raghul.in</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 18:03:18 GMT</lastBuildDate><atom:link href="https://blog.raghul.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[AI and ML: Understanding the Basics in Simple Words]]></title><description><![CDATA[For a long time, I’ve been fascinated by Artificial Intelligence (AI) and how it’s shaping the world around us. But I’ve noticed that many people struggle with the basics AI/ML terms often feel full of jargon and hard to grasp. So, in this blog, I’ll...]]></description><link>https://blog.raghul.in/ai-and-ml-understanding-the-basics-in-simple-words</link><guid isPermaLink="true">https://blog.raghul.in/ai-and-ml-understanding-the-basics-in-simple-words</guid><category><![CDATA[AI]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[DeepLearning]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Mon, 08 Sep 2025 10:03:17 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757319099271/76a68c52-a786-49ed-af90-7c5521c9bd08.png" alt class="image--center mx-auto" /></p>
<p>For a long time, I’ve been fascinated by Artificial Intelligence (AI) and how it’s shaping the world around us. But I’ve noticed that many people struggle with the basics: AI/ML terms often feel full of jargon and hard to grasp. So, in this blog, I’ll break down these concepts in simple words, making it easier for anyone to get started with AI and ML.</p>
<h2 id="heading-what-is-ai">What is AI?</h2>
<p>AI (Artificial Intelligence) is a broad field that focuses on making machines simulate human-like intelligence. It covers multiple areas such as:</p>
<ul>
<li><p><strong>Machine Learning (ML)</strong> – teaching machines to learn from data.</p>
</li>
<li><p><strong>Deep Learning (DL)</strong> – neural networks that handle complex data like images and text, loosely mimicking the human brain.</p>
</li>
</ul>
<p>Think of AI as the umbrella, and ML/DL as important branches under it.</p>
<h2 id="heading-what-is-machine-learning-ml">What is Machine Learning (ML)?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757320726816/1e72f9be-bbf6-44d8-975c-e0e56beff899.png" alt class="image--center mx-auto" /></p>
<p>Machine Learning is a discipline of computer science in which we train machines on data so that they can make predictions without explicit programming. This shift is what makes ML powerful: the machine learns patterns from data and improves over time.</p>
<h3 id="heading-ml-has-two-key-parts-training-and-inference">ML Has Two Key Parts: Training and Inference</h3>
<ol>
<li><p><strong>Training</strong></p>
<ul>
<li><p>In this phase, the model <strong>learns patterns</strong> from data.</p>
</li>
<li><p>You provide the input (features) along with the correct output (labels).</p>
</li>
<li><p>The model adjusts its internal parameters (weights) to minimize errors.</p>
</li>
<li><p><strong>Example:</strong> Training a model with thousands of house price records so it learns how features like size, location, and age affect the price.</p>
</li>
<li><p><strong>Training = Learning phase</strong></p>
</li>
</ul>
</li>
<li><p><strong>Inference</strong></p>
<ul>
<li><p>Once the model is trained, you use it to make predictions on <strong>new, unseen data</strong>.</p>
</li>
<li><p>No labels are given here; the model uses what it has already learned.</p>
</li>
<li><p><strong>Example:</strong> Feeding in the details of a new house to predict its price.</p>
</li>
<li><p><strong>Inference = Using that knowledge to make predictions</strong></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757320209850/678dc737-71e9-4c78-85b9-acb03d04ce43.png" alt class="image--center mx-auto" /></p>
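<p>To make the two phases concrete, here is a minimal sketch in Python (the house sizes, ages, and prices are made-up numbers, and a real pipeline would use far more data): we fit a simple linear model with NumPy, then run inference on an unseen house.</p>
<pre><code class="lang-python">import numpy as np

# Training data: each row is one house, columns are [size_sqft, age_years]
X_train = np.array([[1400.0, 10.0],
                    [1600.0,  5.0],
                    [1700.0, 20.0],
                    [1875.0,  2.0]])
y_train = np.array([245000.0, 312000.0, 279000.0, 308000.0])  # labels (prices)

# --- Training: learn weights that minimize prediction error ---
X = np.hstack([X_train, np.ones((len(X_train), 1))])  # add a bias column
weights, *_ = np.linalg.lstsq(X, y_train, rcond=None)

# --- Inference: predict the price of a new, unseen house (no label given) ---
new_house = np.array([1500.0, 8.0, 1.0])  # size, age, bias term
predicted_price = new_house @ weights
print(round(predicted_price))
</code></pre>
<p>Training adjusts <code>weights</code> to fit the labeled records; inference just applies those learned weights to new input.</p>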
<h2 id="heading-ml-learning-styles">ML Learning Styles</h2>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757321252339/cc7c9d52-9584-4d7c-9e2e-3659f6adc797.png" alt class="image--center mx-auto" /></p>
<p>There are three main learning paradigms (styles) in ML, common to both statistical ML and deep learning:</p>
<ol>
<li><p><strong>Supervised Learning</strong> – Uses labeled data. Example: Predicting if an email is spam or not.</p>
</li>
<li><p><strong>Unsupervised Learning</strong> – Works with unlabeled data. It learns to identify patterns and structures in data without any explicit guidance. Example: Grouping customers into clusters.</p>
</li>
<li><p><strong>Reinforcement Learning (RL)</strong> – The model learns by trial and error, receiving rewards or penalties. RL is also used in training modern Large Language Models (LLMs) with a method called RLHF (Reinforcement Learning from Human Feedback). Example: training LLMs like GPT and Claude.</p>
</li>
</ol>
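<p>As an illustration of the unsupervised case, here is a toy k-means clustering sketch in plain NumPy (the customer numbers are invented, and real projects would normally reach for a library like scikit-learn instead): it groups customers with no labels at all.</p>
<pre><code class="lang-python">import numpy as np

# Made-up customer data: [annual_spend, visits_per_month] -- note: no labels!
customers = np.array([[100.0, 1], [120.0, 2], [110.0, 1],    # low spenders
                      [900.0, 8], [950.0, 9], [880.0, 7]])   # high spenders

def kmeans(data, k, steps=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(steps):
        # Assign each point to its nearest center
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points
        centers = np.array([data[labels == i].mean(axis=0) if np.any(labels == i)
                            else centers[i] for i in range(k)])
    return labels

labels = kmeans(customers, k=2)
print(labels)  # the low spenders end up in one cluster, the high spenders in the other
</code></pre>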
<h3 id="heading-machine-learning-has-two-main-tasks">Machine Learning has two main tasks:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757326103284/4f8b4ec9-69cb-4ab8-a1b3-96e8b3966169.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Classification:</strong> predicting categories <strong>(e.g., spam or not spam, cat or dog)</strong>.</p>
</li>
<li><p><strong>Regression:</strong> predicting a continuous numeric value rather than a category. Given some important features, the model predicts a number <strong>(e.g., house price prediction)</strong>.</p>
</li>
</ul>
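<p>A classification model can be sketched in just a few lines. This toy nearest-centroid spam classifier (the two features and all the numbers are made up for illustration) predicts a category by checking which class's average example is closer:</p>
<pre><code class="lang-python"># Toy classifier: predict "spam" or "not spam" from two made-up features:
# [number of links, count of shouty words like "FREE" or "WINNER"].
def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

spam     = [[7, 5], [9, 8], [6, 4]]   # labeled training examples
not_spam = [[0, 0], [1, 1], [2, 0]]

c_spam, c_ham = centroid(spam), centroid(not_spam)

def classify(email_features):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Pick whichever class centroid is nearer to this email
    return min(("spam", dist2(email_features, c_spam)),
               ("not spam", dist2(email_features, c_ham)),
               key=lambda pair: pair[1])[0]

print(classify([8, 6]))   # near the spam centroid -- prints "spam"
print(classify([1, 0]))   # near the not-spam centroid -- prints "not spam"
</code></pre>
<p>Regression would look the same on the outside, except <code>classify</code> would return a number (like a price) instead of a category name.</p>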
<h2 id="heading-structured-vs-unstructured-data">Structured vs. Unstructured Data</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757322560170/a1dffb84-9c99-4390-b666-773846e9464c.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Structured Data</strong> → Tabular, numbers, relational databases. (Good for classical ML).</p>
</li>
<li><p><strong>Unstructured Data</strong> → Images, text, audio, video. (Needs Deep Learning).</p>
</li>
<li><p><strong>Semi-Structured Data</strong> → JSON, XML, log files.</p>
</li>
</ul>
<h2 id="heading-deep-learning">Deep Learning</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757325735499/ec8663cf-e4a3-41a6-828e-9ea6a1c7983c.png" alt class="image--center mx-auto" /></p>
<p>Deep Learning is a subset of Machine Learning that uses <strong>artificial neural networks</strong> to learn from large amounts of data. It mimics the way the human brain processes information: recognizing patterns, learning from experience, and making decisions.</p>
<p>The main strength of deep learning lies in its ability to handle <strong>unstructured data</strong> like images, audio, and text, which traditional ML struggles with.</p>
<p><strong>Neural Network Architectures :</strong></p>
<ul>
<li><p>Feed-Forward Neural Network (FNN) – <strong>basic prediction tasks</strong> (<strong>e.g.</strong>, predicting house prices based on size, location, etc.)</p>
</li>
<li><p>Recurrent Neural Network (RNN) – <strong>sequences/time series</strong> (<strong>e.g.</strong>, language modeling, time-series forecasting, speech recognition)</p>
</li>
<li><p>Convolutional Neural Network (CNN) – <strong>images/videos</strong> (<strong>e.g.</strong>, facial recognition, medical image analysis, self-driving cars)</p>
</li>
<li><p>Transformers – <strong>advanced text and language tasks</strong> (<strong>e.g.</strong>, machine translation, chatbots, text summarization, code generation)</p>
</li>
</ul>
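<p>To demystify "neural network" a little, here is the forward pass of a tiny feed-forward network written with nothing but NumPy. It is untrained (the weights are random), so the output is meaningless; the point is only to show that a layer is just a matrix multiply followed by a simple activation function:</p>
<pre><code class="lang-python">import numpy as np

rng = np.random.default_rng(42)

# A tiny untrained FNN: 3 inputs, 4 hidden units, 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def relu(z):
    return np.maximum(z, 0.0)   # activation: keep positives, zero out negatives

def forward(x):
    hidden = relu(x @ W1 + b1)    # layer 1: linear step + activation
    return (hidden @ W2 + b2)[0]  # layer 2: linear readout

# One input, e.g. a house as [size (normalized), location score, age (normalized)]
print(forward(np.array([0.7, 0.9, 0.3])))
</code></pre>
<p>Training (which frameworks like PyTorch or TensorFlow automate) would repeatedly nudge <code>W1</code> and <code>W2</code> to reduce the error, just like in the training/inference picture above.</p>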
<h2 id="heading-statistical-machine-learning-vs-deep-learning">Statistical Machine Learning vs Deep Learning</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757325380955/31782cb1-516b-4abe-af8c-fb7f2432ff90.png" alt class="image--center mx-auto" /></p>
<p>Both <strong>Statistical Machine Learning</strong> (classic ML) and <strong>Deep Learning</strong> fall under ML, but they shine in different scenarios.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Aspect</td><td>Statistical ML</td><td>Deep Learning</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Data Type</strong></td><td>Works well with <strong>simple, structured data</strong> (tabular, numeric)</td><td>Best for <strong>complex, unstructured data</strong> (images, text, audio, video)</td></tr>
<tr>
<td><strong>Dataset Size</strong></td><td>Small to medium datasets</td><td>Large-scale, big datasets</td></tr>
<tr>
<td><strong>Features</strong></td><td>Relies on <strong>handcrafted features</strong> (feature engineering is crucial)</td><td>Automatically learns <strong>complex features</strong> from raw data</td></tr>
<tr>
<td><strong>Compute Resources</strong></td><td>Runs on normal CPUs, less computationally heavy</td><td>Requires GPUs/TPUs and high compute power</td></tr>
<tr>
<td><strong>Interpretability</strong></td><td>Easy to interpret and explain results</td><td>Harder to interpret (acts like a black box)</td></tr>
<tr>
<td><strong>Examples</strong></td><td>Logistic Regression, Decision Trees, Random Forest</td><td>CNNs, RNNs, Transformers</td></tr>
</tbody>
</table>
</div><h3 id="heading-conclusion">Conclusion</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757325622734/967f28cd-9d11-4cb0-a21e-6810e0e61902.gif" alt class="image--center mx-auto" /></p>
<p>AI and ML may sound filled with heavy jargon, but once you break them down, the core ideas are simple and exciting. From <strong>statistical ML handling structured data</strong> to <strong>deep learning powering today’s breakthroughs in images, speech, and large language models</strong>, both approaches play a key role in shaping the intelligent systems we use daily.</p>
<p>Connect with me on Linkedin: <a target="_blank" href="https://www.linkedin.com/in/m-raghul/"><strong>Raghul M</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[vLLM ? The Simple Guide for Non-Devs and Curious Minds]]></title><description><![CDATA[Large language models (LLMs) like ChatGPT, LLaMA, and Mistral are incredibly powerful, but they're also resource-hungry. They need lots of memory and processing power to respond to a single prompt, let alone handle multiple users. So how do you run a...]]></description><link>https://blog.raghul.in/vllm-the-simple-guide-for-non-devs-and-curious-minds</link><guid isPermaLink="true">https://blog.raghul.in/vllm-the-simple-guide-for-non-devs-and-curious-minds</guid><category><![CDATA[vLLM]]></category><category><![CDATA[AI]]></category><category><![CDATA[redhat]]></category><category><![CDATA[technology]]></category><category><![CDATA[Computer Science]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Thu, 31 Jul 2025 17:48:36 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://www.varutra.com/ctp/Resources/img/Critical-Remote-Code-Execution-Vulnerability-in-vLLM-via-Mooncake-Integration.jpg" alt="Critical Remote Code Execution Vulnerability in vLLM via Mooncake  (CVE-2025-29783) Patched" class="image--center mx-auto" /></p>
<p>Large language models (LLMs) like ChatGPT, LLaMA, and Mistral are incredibly powerful, but they're also resource-hungry. They need lots of memory and processing power to respond to a single prompt, let alone handle multiple users. So how do you run a big LLM efficiently, especially if you want to host it yourself?</p>
<p>That’s where <strong>vLLM</strong> plays a vital role: it’s an open-source engine designed to serve large language models efficiently, quickly, and at scale.</p>
<p>This blog is your plain, layman’s-terms guide to understanding what vLLM is, how it works, and why it's a game-changer for running LLMs.</p>
<hr />
<h2 id="heading-what-is-vllm-and-why-should-you-care">What Is vLLM (And Why Should You Care)?</h2>
<p>Imagine you want to build your own chatbot, just like ChatGPT, but hosted on your own machine or cloud. You need it to:</p>
<ul>
<li><p>Handle long conversations</p>
</li>
<li><p>Support multiple users at once</p>
</li>
<li><p>Be fast and responsive</p>
</li>
</ul>
<p><strong>vLLM</strong> (the “v” nods to virtual memory, the operating-system idea behind its PagedAttention technique) is a backend engine that makes this possible. It works under the hood to serve models like LLaMA, Qwen, and Mistral while keeping GPU memory usage efficient and response times low.</p>
<p><strong>Let’s break down the main ideas behind vLLM in simple terms:</strong></p>
<h3 id="heading-1-tokens">1. <strong>Tokens</strong></h3>
<p>LLMs don’t understand words directly. They split your input into smaller units called <strong>tokens</strong>. For example, "chatbot" might become "chat" + "bot".</p>
<h3 id="heading-2-attention">2. <strong>Attention</strong></h3>
<p>When the model generates the next word, it looks back at previous tokens and decides which ones matter most. This is called <strong>attention</strong>.</p>
<h3 id="heading-3-kv-key-value-cache">3. <strong>KV (Key-Value) Cache</strong></h3>
<p>As the model processes input, it saves information about each token (its keys and values) from past steps into a memory bank. This is the <strong>KV cache</strong>, which lets the model remember the conversation so far.</p>
<h3 id="heading-4-pagedattention-the-magic">4. <strong>PagedAttention</strong> (The Magic)</h3>
<p>Normally, the KV cache grows as the conversation gets longer. That eats up GPU memory fast. <strong>PagedAttention</strong> solves this by:</p>
<ul>
<li><p>Storing memory in chunks (called pages)</p>
</li>
<li><p>Swapping pages in and out of GPU as needed</p>
</li>
</ul>
<p>It’s like working at a small desk: you keep only the important notes on your desk and file away the rest, pulling them out only when you need them.</p>
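<p>The desk analogy can be sketched in a few lines of Python. This toy page table is an illustration only (real PagedAttention manages fixed-size GPU memory blocks, not Python lists), but it shows the core trick: the cache grows one small page at a time instead of as one giant contiguous buffer.</p>
<pre><code class="lang-python">PAGE_SIZE = 4  # entries per page (real systems use block sizes like 16)

class PagedCache:
    """Toy KV cache: appends per-token entries into fixed-size pages."""
    def __init__(self):
        self.pages = []  # each page is a list holding up to PAGE_SIZE entries

    def append(self, kv_entry):
        if not self.pages or len(self.pages[-1]) == PAGE_SIZE:
            self.pages.append([])       # allocate a new page only when needed
        self.pages[-1].append(kv_entry)

cache = PagedCache()
for token in "Tell me a joke about proxies".split():
    cache.append(("key", "value", token))  # stand-in for real key/value tensors

print(len(cache.pages))  # 6 tokens fit in 2 pages (4 entries + 2 entries)
</code></pre>
<p>Because memory is handed out page by page, pages can also be swapped out and reused across conversations, which is where the big GPU savings come from.</p>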
<h3 id="heading-5-vllm-engine">5. <strong>vLLM Engine</strong></h3>
<p>This is the smart part of the system. It:</p>
<ul>
<li><p>Loads the model</p>
</li>
<li><p>Tokenizes the input</p>
</li>
<li><p>Manages the KV cache using PagedAttention</p>
</li>
<li><p>Streams the output</p>
</li>
</ul>
<p>All while keeping GPU usage low and performance high.</p>
<h3 id="heading-6-openai-compatible-api">6. <strong>OpenAI-Compatible API</strong></h3>
<p>vLLM exposes an API that mirrors OpenAI's API endpoints, so existing OpenAI client code can simply point at your own server.</p>
<p>See: <a target="_blank" href="https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html">https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html</a></p>
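<p>As a sketch of what that looks like from the client side (the base URL, port, and model name below are placeholders; adjust them to whatever your vLLM server is actually serving), a chat request has the same JSON shape as OpenAI's <code>/v1/chat/completions</code>:</p>
<pre><code class="lang-python">import json
import urllib.request

def build_chat_request(model, prompt):
    # Same JSON body shape as OpenAI's chat completions API
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }

def ask(prompt, model="your-model-name", base_url="http://localhost:8000/v1"):
    """POST a chat completion to a running vLLM server (not called here)."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

print(json.dumps(build_chat_request("your-model-name", "Tell me a joke."), indent=2))
</code></pre>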
<hr />
<h2 id="heading-why-vllm-matters">Why vLLM Matters</h2>
<p>Here’s why developers and organizations are excited about vLLM:</p>
<ul>
<li><p><strong>Fast</strong> - Generates responses quickly, even for long chats.</p>
</li>
<li><p><strong>Scalable</strong> - Can handle multiple users at once.</p>
</li>
<li><p><strong>Memory-efficient</strong> - Thanks to PagedAttention.</p>
</li>
<li><p><strong>Easy to integrate</strong> - Compatible with OpenAI-style APIs.</p>
</li>
</ul>
<p>If you want to build apps like ChatGPT, or host your own LLMs securely, vLLM is the engine you want.</p>
<hr />
<h2 id="heading-a-simple-chatbot-flow-with-vllm">A Simple Chatbot Flow with vLLM</h2>
<ol>
<li><p><img src="https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExM2Z6emI4aXhxMDhka2c2cmQxaGRrODN6dTI0bGd6ZzUxMWUwYXl4aCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/6ZVAVQMXppKtnD9oIT/giphy.gif" alt class="image--center mx-auto" /></p>
<p> User asks: "Tell me a joke."</p>
</li>
<li><p>Input gets tokenized.</p>
</li>
<li><p>Model checks previous tokens (if any) using attention.</p>
</li>
<li><p>vLLM loads needed memory pages.</p>
</li>
<li><p>Response is generated and streamed back.</p>
</li>
</ol>
<p>And it does this <strong>fast</strong>, even if you’re chatting with multiple users.</p>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p><img src="https://media2.giphy.com/media/v1.Y2lkPTc5MGI3NjExc3hld2Z6dWVjbDJkanR5OG82c3U4dXN0cGlkbDB0ejJsNGN5NmdoeiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/1tuooPJnx3kKqBmNEW/giphy.gif" alt class="image--center mx-auto" /></p>
<p><strong>vLLM = fast, memory-efficient LLM serving engine with an OpenAI-like API.</strong></p>
<p>You now understand:</p>
<ul>
<li><p>What attention, KV cache, and PagedAttention mean</p>
</li>
<li><p>Why vLLM is better than regular model serving</p>
</li>
<li><p>How it fits into chatbot pipelines</p>
</li>
</ul>
<p>If you're building with LLMs and want speed, scale, and control, vLLM is 100% worth checking out.</p>
<p>Connect with me on Linkedin: <a target="_blank" href="https://www.linkedin.com/in/m-raghul/"><strong>Raghul M</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Understanding Reverse Proxy and Forward Proxy — The Simple Way]]></title><description><![CDATA[When we browse the internet, a lot happens behind the scenes. One such hidden hero is the proxy. Whether it’s keeping your identity safe, speeding things up, or managing requests, proxies silently work in the background to make your online experience...]]></description><link>https://blog.raghul.in/understanding-reverse-proxy-and-forward-proxy-the-simple-way</link><guid isPermaLink="true">https://blog.raghul.in/understanding-reverse-proxy-and-forward-proxy-the-simple-way</guid><category><![CDATA[Devops]]></category><category><![CDATA[networking]]></category><category><![CDATA[technology]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Sun, 11 May 2025 09:05:43 GMT</pubDate><content:encoded><![CDATA[<p>When we browse the internet, a lot happens behind the scenes. One such hidden hero is the <strong>proxy</strong>. Whether it’s keeping your identity safe, speeding things up, or managing requests, proxies silently work in the background to make your online experience smoother. Let's break it all down.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746954250723/827440c4-abf7-4d94-bfbd-1cb5f1bb0593.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-what-is-a-proxy">What Is a Proxy?</h2>
<p>A <strong>proxy</strong> is like a middleman between you and the internet. Instead of directly connecting to a website, your request first goes to a proxy server. That server then talks to the website on your behalf.</p>
<p>Imagine you’re at a party and want to ask someone a question, but instead of going directly, you ask your friend to pass the message. Your friend is the <strong>proxy</strong>.</p>
<h2 id="heading-why-use-a-proxy">Why Use a Proxy?</h2>
<p>There are many reasons to use a proxy:</p>
<ul>
<li><p>To <strong>hide your identity (IP address)</strong></p>
</li>
<li><p>To <strong>filter content</strong> (like blocking social media at work)</p>
</li>
<li><p>To <strong>speed up browsing</strong> using cached content</p>
</li>
<li><p>To <strong>add a layer of security</strong> and control over traffic</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">📌</div>
<div data-node-type="callout-text">Now, proxies come in two main flavors: <strong>Reverse Proxy</strong> and <strong>Forward Proxy</strong>. Let’s explore both.</div>
</div>

<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746953460886/9845dac9-78a8-4eb3-8f82-ff089bf0242f.webp" alt class="image--center mx-auto" /></p>
<h2 id="heading-what-is-a-reverse-proxy">What Is a Reverse Proxy?</h2>
<p>A <strong>Reverse Proxy</strong> sits in front of one or more servers and handles requests <strong>from clients (like users)</strong> to those servers.</p>
<p>Think of it like a receptionist in an office. When visitors come in (users), the receptionist decides which employee (server) should handle the request.</p>
<h3 id="heading-why-use-a-reverse-proxy">✅ Why Use a Reverse Proxy?</h3>
<ul>
<li><p><strong>Load Balancing</strong> – It distributes traffic to different servers to avoid overload.</p>
</li>
<li><p><strong>Security</strong> – It hides the internal structure of your servers from outsiders.</p>
</li>
<li><p><strong>Caching</strong> – It can store frequent responses and send them faster.</p>
</li>
<li><p><strong>SSL Termination</strong> – It handles HTTPS encryption, reducing work on backend servers.</p>
</li>
</ul>
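<p>To make the load-balancing idea concrete, here is a toy round-robin scheduler in Python (the backend addresses are made up; a real reverse proxy like NGINX does this for you): it simply cycles through the backend pool so each new request lands on the next server.</p>
<pre><code class="lang-python">import itertools

# Hypothetical pool of backend servers sitting behind one reverse proxy
backends = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
pool = itertools.cycle(backends)

def route(request_path):
    backend = next(pool)  # round-robin: each request goes to the next server
    return f"forwarding {request_path} to {backend}"

for path in ["/home", "/about", "/home", "/pricing"]:
    print(route(path))
# The 4th request wraps around to the first backend again.
</code></pre>
<p>Clients only ever see the proxy's address; which backend actually answered stays hidden, which is also why reverse proxies double as a security layer.</p>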
<h2 id="heading-what-is-a-forward-proxy">What Is a Forward Proxy?</h2>
<p>A <strong>Forward Proxy</strong> sits in front of <strong>the client (user)</strong> and makes requests <strong>to websites</strong> on the user’s behalf.</p>
<p>Imagine you're in a country where a website is blocked. You tell your friend in another country to access it and send it to you. That friend is acting as your <strong>forward proxy</strong>.</p>
<h3 id="heading-why-use-a-forward-proxy">✅ Why Use a Forward Proxy?</h3>
<ul>
<li><p><strong>Bypass restrictions</strong> – Like accessing blocked websites.</p>
</li>
<li><p><strong>Hide your IP address</strong> – Helps with anonymity.</p>
</li>
<li><p><strong>Content filtering</strong> – Schools and offices use it to block certain sites.</p>
</li>
<li><p><strong>Monitoring</strong> – Organizations track employee usage.</p>
</li>
</ul>
<h3 id="heading-differences-between-forward-and-reverse-proxy">Differences Between Forward and Reverse Proxy</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>Forward Proxy</strong></td><td><strong>Reverse Proxy</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Who uses it?</td><td>The <strong>client</strong> (user)</td><td>The <strong>server</strong> side</td></tr>
<tr>
<td>Purpose</td><td>Access control &amp; privacy for users</td><td>Load balancing, caching &amp; security</td></tr>
<tr>
<td>Hides identity of</td><td>The <strong>user</strong></td><td>The <strong>server</strong></td></tr>
<tr>
<td>Example Use Case</td><td>Accessing a blocked website</td><td>Managing multiple backend servers</td></tr>
</tbody>
</table>
</div><h2 id="heading-why-are-proxies-important-in-devops">Why Are Proxies Important? (In DevOps)</h2>
<p>In DevOps, proxies are crucial for:</p>
<ul>
<li><p>🔐 <strong>Security</strong> – Hide backend servers and control access.</p>
</li>
<li><p>⚖️ <strong>Load Balancing</strong> – Distribute traffic across services.</p>
</li>
<li><p>🚀 <strong>Performance</strong> – Cache content to speed up responses.</p>
</li>
<li><p>🧪 <strong>Build Control</strong> – Restrict internet access in CI/CD.</p>
</li>
<li><p>🧩 <strong>Microservices</strong> – Enable smooth, secure communication.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746953845341/1022567f-bf4a-41c8-ae61-7b90ab500304.gif" alt class="image--center mx-auto" /></p>
<p>Proxies aren’t just networking tools — they’re a <strong>core part of modern DevOps</strong>. From security to scalability, proxies help teams build reliable and efficient systems.</p>
<ul>
<li><p>✅ <strong>Forward Proxy</strong>: Acts <strong>on behalf of the client</strong> (user). It hides the client’s identity from the server.</p>
</li>
<li><p>✅ <strong>Reverse Proxy</strong>: Acts <strong>on behalf of the server</strong>. It hides the server’s identity from the client.</p>
</li>
</ul>
<p>In the next post, we'll explore <strong>hands-on proxy setups and real-world use cases using NGINX</strong>. Stay tuned!</p>
<p>Connect with me on Linkedin: <a target="_blank" href="https://www.linkedin.com/in/m-raghul/"><strong>Raghul M</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[How I Turned an Old Laptop into a Local AI Server (for Free!) with Ollama + Cloudflare Tunnel]]></title><description><![CDATA[Introduction :
I had an old laptop and a wild idea…
Most GenAI tools need expensive GPUs or cloud credits. I had neither. So, I asked myself can I run a language model locally, without the cloud, and still make it accessible from anywhere?
Turns out,...]]></description><link>https://blog.raghul.in/local-ai-server-for-free-with-ollama-cloudflare-tunnel</link><guid isPermaLink="true">https://blog.raghul.in/local-ai-server-for-free-with-ollama-cloudflare-tunnel</guid><category><![CDATA[AI]]></category><category><![CDATA[ollama]]></category><category><![CDATA[mlops]]></category><category><![CDATA[chatbot]]></category><category><![CDATA[#model-deployment]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Fri, 11 Apr 2025 15:10:08 GMT</pubDate><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction :</h3>
<p>I had an old laptop and a wild idea…</p>
<p>Most GenAI tools need expensive GPUs or cloud credits. I had neither. So, I asked myself <em>can I run a language model locally, without the cloud, and still make it accessible from anywhere?</em></p>
<p>Turns out, yes and it was surprisingly fun. Here’s exactly how I built a self-hosted AI server using <strong>Ollama</strong> and <strong>Cloudflare Tunnel</strong>, step-by-step.</p>
<hr />
<h3 id="heading-project-overview">Project overview :</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744383436128/30d0d0f9-c880-4147-bced-1d69f2d5bb68.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-hardware-amp-os-setup">Hardware &amp; OS Setup 💻</h3>
<p>I used an old laptop with the following specs:</p>
<ul>
<li><p><strong>Storage</strong>: 465 GB</p>
</li>
<li><p><strong>RAM</strong>: 4 GB</p>
</li>
<li><p><strong>OS</strong>: Xubuntu (lightweight and efficient) or Linux Mint (any lightweight Linux distribution will work)</p>
</li>
</ul>
<p>Why Xubuntu/Linux Mint?</p>
<ul>
<li><p>Low memory usage</p>
</li>
<li><p>Fast performance on older hardware</p>
</li>
<li><p>Easy to set up and supports modern tools</p>
</li>
</ul>
<hr />
<h3 id="heading-installing-ollama">Installing Ollama :</h3>
<p><strong>Ollama</strong> is a powerful CLI tool that allows you to run and interact with language models locally. Here's how I installed it:</p>
<pre><code class="lang-bash">$ curl -fsSL https://ollama.com/install.sh | sh
</code></pre>
<p>Then I pulled a lightweight model for fast performance:</p>
<pre><code class="lang-bash">$ ollama pull tinyllama
</code></pre>
<p>I later tried <code>deepseek-r1:1.5b</code> and it worked great too!</p>
<hr />
<h3 id="heading-serving-the-llm">🔄 Serving the LLM</h3>
<p>To serve the model and make it accessible:</p>
<pre><code class="lang-bash">$ OLLAMA_HOST=0.0.0.0 ollama serve
</code></pre>
<p><strong>Note</strong>: If you run into <code>address already in use</code>, try a different port (Ollama reads it as part of <code>OLLAMA_HOST</code>):</p>
<pre><code class="lang-bash">$ OLLAMA_HOST=0.0.0.0:11435 ollama serve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744381328036/8dc9940f-240d-499c-9907-fa4b38aaef48.jpeg" alt class="image--center mx-auto" /></p>
<p>Verify it's working; this lists all the available models:</p>
<pre><code class="lang-bash">$ curl http://localhost:11435/api/tags
</code></pre>
<hr />
<h3 id="heading-exposing-ollma-localhost-with-cloudflare-tunnel">Exposing Ollama's Localhost with Cloudflare Tunnel</h3>
<p>To make the local server publicly accessible, I used Cloudflare Tunnel. Open a new terminal and install the <code>cloudflared</code> client:</p>
<pre><code class="lang-bash">$ wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64 -O cloudflared
$ chmod +x cloudflared
</code></pre>
<p>To verify the installation, run:</p>
<pre><code class="lang-bash">$ cloudflared --version
</code></pre>
<p>Once verified, let's expose the localhost using <code>cloudflared</code>:</p>
<pre><code class="lang-bash">$ cloudflared tunnel --url http://localhost:11435
</code></pre>
<p>This gives you a public URL like:</p>
<pre><code class="lang-plaintext">https://your-unique-subdomain.trycloudflare.com
</code></pre>
<p>Now your Ollama model is accessible securely from anywhere! You can call it via curl or the API from any application.</p>
<pre><code class="lang-bash">curl https://your-unique-subdomain.trycloudflare.com/api/generate -d <span class="hljs-string">'{
  "model": "deepseek-r1:1.5b",
  "prompt": "Write a Python function to reverse a string"
}'</span>
</code></pre>
<hr />
<h3 id="heading-creating-a-streamlit-frontend">🌐 Creating a Streamlit Frontend</h3>
<p>I built a Streamlit app to interact with the model easily:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> streamlit <span class="hljs-keyword">as</span> st
<span class="hljs-keyword">import</span> requests

<span class="hljs-comment"># --- Config ---</span>
OLLAMA_BASE_URL = <span class="hljs-string">"https://your-unique-subdomain.trycloudflare.com"</span>  <span class="hljs-comment"># or your tunnel domain</span>
TAGS_URL = <span class="hljs-string">f"<span class="hljs-subst">{OLLAMA_BASE_URL}</span>/api/tags"</span>
GENERATE_URL = <span class="hljs-string">f"<span class="hljs-subst">{OLLAMA_BASE_URL}</span>/api/generate"</span>

<span class="hljs-comment"># --- Page Settings ---</span>
st.set_page_config(page_title=<span class="hljs-string">"Ollama Chat"</span>, layout=<span class="hljs-string">"centered"</span>)
st.title(<span class="hljs-string">"🧠 Ollama Chat Interface"</span>)

<span class="hljs-comment"># --- Fetch Models ---</span>
<span class="hljs-meta">@st.cache_data</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">fetch_models</span>():</span>
    <span class="hljs-keyword">try</span>:
        response = requests.get(TAGS_URL)
        <span class="hljs-keyword">if</span> response.status_code == <span class="hljs-number">200</span>:
            data = response.json()
            models = [model[<span class="hljs-string">"name"</span>] <span class="hljs-keyword">for</span> model <span class="hljs-keyword">in</span> data.get(<span class="hljs-string">"models"</span>, [])]
            <span class="hljs-keyword">return</span> models
        <span class="hljs-keyword">else</span>:
            st.error(<span class="hljs-string">f"Failed to fetch models: <span class="hljs-subst">{response.status_code}</span>"</span>)
            <span class="hljs-keyword">return</span> []
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        st.error(<span class="hljs-string">f"Error fetching models: <span class="hljs-subst">{e}</span>"</span>)
        <span class="hljs-keyword">return</span> []

<span class="hljs-comment"># --- UI: Select Model ---</span>
models = fetch_models()
<span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> models:
    st.warning(<span class="hljs-string">"No models available. Please load a model into Ollama."</span>)
    st.stop()

selected_model = st.selectbox(<span class="hljs-string">"📦 Choose a model:"</span>, models)

<span class="hljs-comment"># --- UI: Prompt Input ---</span>
prompt = st.text_area(<span class="hljs-string">"💬 Enter your prompt:"</span>, height=<span class="hljs-number">200</span>)

<span class="hljs-keyword">if</span> st.button(<span class="hljs-string">"🚀 Generate Response"</span>):
    <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> prompt.strip():
        st.warning(<span class="hljs-string">"Prompt cannot be empty."</span>)
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">with</span> st.spinner(<span class="hljs-string">"Generating response..."</span>):
            payload = {
                <span class="hljs-string">"model"</span>: selected_model,
                <span class="hljs-string">"prompt"</span>: prompt,
                <span class="hljs-string">"stream"</span>: <span class="hljs-literal">False</span>
            }
            <span class="hljs-keyword">try</span>:
                response = requests.post(GENERATE_URL, json=payload)
                <span class="hljs-keyword">if</span> response.status_code == <span class="hljs-number">200</span>:
                    result = response.json()
                    st.markdown(<span class="hljs-string">"### ✅ Response"</span>)
                    st.write(result.get(<span class="hljs-string">"response"</span>, <span class="hljs-string">"No response received."</span>))
                <span class="hljs-keyword">else</span>:
                    st.error(<span class="hljs-string">f"Error <span class="hljs-subst">{response.status_code}</span>: <span class="hljs-subst">{response.text}</span>"</span>)
            <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
                st.error(<span class="hljs-string">f"Request failed: <span class="hljs-subst">{e}</span>"</span>)
</code></pre>
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744381294110/b617d213-84ee-4212-bc6e-51b1b8a1ccd0.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-bonus-tips">📊 Bonus Tips</h3>
<ul>
<li><p>Use <code>htop</code> or <code>glances</code> to monitor memory and CPU.</p>
</li>
<li><p>Check disk usage with <code>lsblk</code> and <code>df -h</code>.</p>
</li>
<li><p>Use lightweight models for fast inference on low-end machines.</p>
</li>
</ul>
<h3 id="heading-what-you-can-build">🚀 What You Can Build</h3>
<ul>
<li><p>Personal chatbot</p>
</li>
<li><p>Code generation tool</p>
</li>
<li><p>Lightweight AI backend for your apps</p>
</li>
<li><p>Home AI server on a budget</p>
</li>
</ul>
<hr />
<h3 id="heading-conclusion">📖 Conclusion :</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744380503276/2adb7b36-921b-45f5-9005-2e79265ebf9e.gif" alt class="image--center mx-auto" /></p>
<p>This project proved something awesome: you don’t need top-tier hardware to experiment with AI. With a bit of creativity and the right tools, even an old laptop can become a powerful AI playground.</p>
<p>Feel free to check out the <a target="_blank" href="https://chatgpt.com/c/67f7c681-5514-8003-8869-379b0c863647">GitHub repository</a> for the complete setup.</p>
<p>If you try this out or want help replicating it – feel free to reach out! <a target="_blank" href="https://www.linkedin.com/in/m-raghul/">Raghul M</a></p>
]]></content:encoded></item><item><title><![CDATA[Ollama & OpenWebUI Setup Guide: Run LLMs Locally with Ease]]></title><description><![CDATA[Introduction

Image source : https://www.packetswitch.co.uk/
Ollama is an open-source tool designed to help developers run and develop large language models (LLMs) locally on their machines. It enables efficient AI model execution without relying on ...]]></description><link>https://blog.raghul.in/ollama-and-openwebui-setup-guide-run-llms-locally-with-ease</link><guid isPermaLink="true">https://blog.raghul.in/ollama-and-openwebui-setup-guide-run-llms-locally-with-ease</guid><category><![CDATA[ollama]]></category><category><![CDATA[Python]]></category><category><![CDATA[llm]]></category><category><![CDATA[AI]]></category><category><![CDATA[openai]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Fri, 21 Mar 2025 10:21:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742550977555/5c4f1af1-1c0e-4ef0-9d31-ca08659eac35.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742551937701/d09cd498-e42e-43db-8ddb-24ce9fadc966.png" alt class="image--center mx-auto" /></p>
<p>Image source : <a target="_blank" href="https://www.packetswitch.co.uk/">https://www.packetswitch.co.uk/</a></p>
<p>Ollama is an open-source tool designed to help developers run and develop large language models (LLMs) locally on their machines. It enables efficient AI model execution without relying on cloud-based services, ensuring cost-effectiveness, privacy, and performance.</p>
<p>🔗 <strong>Download Ollama</strong>: <a target="_blank" href="https://ollama.com/download">Ollama Official Website</a></p>
<hr />
<h2 id="heading-ollama-commands">Ollama Commands</h2>
<h3 id="heading-1-running-a-model">1. Running a Model:</h3>
<pre><code class="lang-bash">$ ollama run &lt;model-name&gt;
</code></pre>
<ul>
<li>Checks if the model is available locally; if not, it pulls it from the Ollama model registry and then runs the model.</li>
</ul>
<h3 id="heading-2-starting-the-api-server">2. Starting the API Server:</h3>
<pre><code class="lang-bash">$ ollama serve
</code></pre>
<ul>
<li>Runs the Ollama API on port <strong>11434</strong>.</li>
</ul>
<h3 id="heading-3-verifying-model-integrity">3. Verifying Model Integrity:</h3>
<pre><code class="lang-bash">$ ollama check &lt;model-name&gt;
</code></pre>
<ul>
<li>Uses SHA-256 digest to confirm authenticity.</li>
</ul>
<h3 id="heading-4-listing-available-models">4. Listing Available Models:</h3>
<pre><code class="lang-bash">$ ollama list
</code></pre>
<ul>
<li>Displays the locally available models.</li>
</ul>
<h3 id="heading-5-pulling-a-model-from-the-registry">5. Pulling a Model from the Registry:</h3>
<pre><code class="lang-bash">$ ollama pull &lt;model-name&gt;
</code></pre>
<ul>
<li>Downloads a model from the Ollama registry.</li>
</ul>
<h3 id="heading-6-pushing-a-model-to-the-registry">6. Pushing a Model to the Registry:</h3>
<pre><code class="lang-bash">$ ollama push &lt;username&gt;/&lt;model-name&gt;
</code></pre>
<ul>
<li>Uploads a custom model to the Ollama registry.</li>
</ul>
<h3 id="heading-7-creating-a-custom-model">7. Creating a Custom Model:</h3>
<pre><code class="lang-bash">$ ollama create my_custom_model -f ./Modelfile
</code></pre>
<ul>
<li>Builds a custom model based on the configuration in a <strong>Modelfile</strong>.</li>
</ul>
<h3 id="heading-8-check-avilable-commands">8. Check available commands:</h3>
<p>Inside an interactive <code>ollama run</code> session, type <code>/?</code> to list the available slash commands:</p>
<pre><code class="lang-bash">&gt;&gt;&gt; /?

Available Commands:
  /<span class="hljs-built_in">set</span>            Set session variables
  /show           Show model information
  /load &lt;model&gt;   Load a session or model
  /save &lt;model&gt;   Save your current session
  /clear          Clear session context
  /<span class="hljs-built_in">bye</span>            Exit
  /?, /<span class="hljs-built_in">help</span>       Help <span class="hljs-keyword">for</span> a <span class="hljs-built_in">command</span>
  /? shortcuts    Help <span class="hljs-keyword">for</span> keyboard shortcuts
</code></pre>
<hr />
<h2 id="heading-rest-api-endpoints">REST API Endpoints</h2>
<p>Ollama provides REST API endpoints to interact with models programmatically.</p>
<h3 id="heading-ollama-rest-api-endpoints">Ollama REST API endpoints :</h3>
<pre><code class="lang-http"><span class="hljs-attribute">POST /api/generate   # Generates a completion (supports streaming).
POST /api/chat       # Handles chat-based (multi-turn) interactions.
GET  /api/tags       # Returns the list of locally installed models.
POST /api/pull       # Pulls a model from the Ollama registry.
POST /api/push       # Uploads a locally built model to the Ollama registry.
GET  /api/ps         # Lists the models currently loaded in memory.</span>
</code></pre>
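<p>To see the generate endpoint in action, here is a minimal sketch that calls <code>POST /api/generate</code> using only the Python standard library. The model name <code>llama3</code> in the usage comment is just an example; substitute any model you have pulled locally.</p>
<pre><code class="lang-python">import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama API address

def build_payload(model, prompt, stream=False):
    """Assemble the JSON body expected by POST /api/generate."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model, prompt):
    """Send a non-streaming generate request and return the response text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL + "/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server and a pulled model):
# print(generate("llama3", "What is Ollama?"))
</code></pre>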
<hr />
<h2 id="heading-creating-and-managing-custom-models">Creating and Managing Custom Models</h2>
<h3 id="heading-creating-a-custom-image">Creating a Custom Model</h3>
<p>To create a custom model, follow these steps:</p>
<ol>
<li>Create a <strong>Modelfile</strong> with the necessary configurations:</li>
</ol>
<pre><code class="lang-plaintext">FROM BASE_MODEL:TAG
PARAMETER temperature 0.2
SYSTEM "You are a helpful assistant."
MESSAGE "Welcome to Ollama!"
</code></pre>
<ol start="2">
<li>Build the model:</li>
</ol>
<pre><code class="lang-bash">$ ollama create my_custom_model -f ./Modelfile
</code></pre>
<ol start="3">
<li>Pushing a Custom Model to the Registry</li>
</ol>
<pre><code class="lang-bash">$ ollama push my_custom_model
</code></pre>
<ul>
<li>This command uploads the locally built model to the Ollama registry.</li>
</ul>
<h3 id="heading-pulling-a-model-from-the-ollama-registry">Pulling a Model from the Ollama Registry</h3>
<pre><code class="lang-bash">$ ollama pull &lt;username&gt;/my_custom_model
</code></pre>
<ul>
<li>Downloads and installs a model from the Ollama registry for local use.</li>
</ul>
<hr />
<h2 id="heading-using-ollama-with-python">Using Ollama with Python</h2>
<p>Ollama can be integrated with Python to generate AI-powered responses. Below is an example using the official <code>ollama</code> Python library (<code>pip install ollama</code>):</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> ollama

response = ollama.generate(
    model=<span class="hljs-string">"deepseek-r1"</span>,
    prompt=<span class="hljs-string">"What is Deepseek?"</span>
)

print(response[<span class="hljs-string">"response"</span>])
</code></pre>
<ul>
<li>This script sends a request to the Ollama API running locally, generates a response using the specified model, and prints the output.</li>
</ul>
<hr />
<h2 id="heading-setting-up-openwebui-with-ollama">Setting Up OpenWebUI with Ollama</h2>
<p><a target="_blank" href="https://github.com/open-webui/open-webui">OpenWebUI</a> is a user-friendly interface that enhances the experience of interacting with LLMs through Ollama.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742552151631/0dbcea52-0b5f-4583-8fcc-5d7fdbdc68a2.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-running-openwebui-with-docker">Running OpenWebUI with Docker</h3>
<p>You can easily set up OpenWebUI using Docker with the following command:</p>
<pre><code class="lang-bash">$ docker run -d --name open-webui -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v open-webui:/app/backend/data --restart always ghcr.io/open-webui/open-webui:main
</code></pre>
<ul>
<li><p>This runs OpenWebUI as a Docker container and connects it to your local Ollama instance.</p>
</li>
<li><p>Access OpenWebUI at <code>http://localhost:3000</code>.</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion">Conclusion :</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742552427461/50102451-7beb-4f5a-8a04-6bb3c953a9ae.webp" alt class="image--center mx-auto" /></p>
<p>Ollama provides a powerful way to run LLMs locally, ensuring privacy, performance, and cost-effectiveness. With its easy-to-use CLI commands, REST API support, Python integration, and OpenWebUI interface, it’s a great tool for developers looking to build AI applications without relying on cloud-based services.</p>
<p>🚀 <strong>Ready to explore Ollama?</strong> Download it today and start running your own AI models locally!</p>
<p>Connect with me on Linkedin: <a target="_blank" href="https://www.linkedin.com/in/m-raghul/"><strong>Raghul M</strong></a></p>
<p>💬 Have questions or feedback? Drop them in the comments below!</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Embeddings : How AI Learns Meaning from Text, Images, and Data]]></title><description><![CDATA[Hey Everyone 👋🏻 !
In my previous blog, we explored how Transformers work and how they revolutionized modern AI, paving the way for major advancements.
Today, let's dive into embeddings—the foundation of Large Language Models (LLMs). We'll cover how...]]></description><link>https://blog.raghul.in/understanding-embeddings-how-ai-learns-meaning-from-text-images-and-data</link><guid isPermaLink="true">https://blog.raghul.in/understanding-embeddings-how-ai-learns-meaning-from-text-images-and-data</guid><category><![CDATA[AI]]></category><category><![CDATA[nlp]]></category><category><![CDATA[embedding]]></category><category><![CDATA[ML]]></category><category><![CDATA[technology]]></category><category><![CDATA[Computer Science]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Wed, 05 Feb 2025 13:23:34 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://miro.medium.com/v2/resize:fit:1400/1*QZFroonLyTxtOaA6AAImmw.png" alt="A Complete Guide to Creating and Storing Vector Embeddings! | by Pavan  Belagatti | Level Up Coding" /></p>
<p><strong>Hey Everyone</strong> 👋🏻 <strong>!</strong></p>
<p>In my previous blog, we explored how <strong>Transformers work</strong> and how they <strong>revolutionized modern AI</strong>, paving the way for major advancements.</p>
<p>Today, let's dive into <strong>embeddings</strong>—the <strong>foundation of Large Language Models (LLMs)</strong>. We'll cover <strong>how embeddings work, different types of embeddings, and their applications.</strong> Let's get started!</p>
<h3 id="heading-what-is-an-embedding"><strong>What is an Embedding?</strong></h3>
<p>An <strong>embedding</strong> is a <strong>numerical representation</strong> of data (words, sentences, images, videos, audio, documents) in a <strong>vector format</strong> within a <strong>multidimensional space</strong>. These representations <strong>capture meaning and relationships</strong> between data points.</p>
<p>It is also known as a vector embedding.</p>
<p>📌 <strong>Example:</strong></p>
<p><img src="https://miro.medium.com/1*BJ9yksA-xmubGhrIdAybDQ@2x.png" alt="Word Vector Mathematics Concept
(image by author)" /></p>
<p><em>Image sources :</em> <a target="_blank" href="https://towardsdatascience.com/whats-behind-word2vec-95e3326a833a/">towardsdatascience.com</a></p>
<p>Words with similar meanings—like <strong>"king" and "queen"</strong>—will have embeddings that are <strong>closer together</strong> in vector space.</p>
<hr />
<h2 id="heading-types-of-embeddings"><strong>Types of Embeddings</strong></h2>
<ol>
<li><h3 id="heading-word-embeddings"><strong>Word Embeddings</strong></h3>
</li>
</ol>
<ul>
<li><p>Represent individual words in a multi-dimensional space.</p>
</li>
<li><p>Capture relationships between words.</p>
</li>
</ul>
<p>📌 <strong>Models:</strong> Word2Vec, GloVe, FastText (by Facebook)<br />📌 <strong>Use Cases:</strong> Machine translation, chatbots, search engines</p>
<ol start="2">
<li><h3 id="heading-text-embeddings"><strong>Text Embeddings</strong></h3>
</li>
</ol>
<ul>
<li>Represent longer texts (phrases, sentences, paragraphs, or documents) as vectors.</li>
</ul>
<p>📌 <strong>Models:</strong> BERT, Paragraph2Vec<br />📌 <strong>Use Cases:</strong> Text classification, sentiment analysis</p>
<blockquote>
<p><strong>Note:</strong></p>
<ul>
<li><p><strong>Word embeddings</strong> focus on <strong>individual words</strong>.</p>
</li>
<li><p><strong>Text embeddings</strong> capture the <strong>meaning of entire texts</strong>.</p>
</li>
</ul>
</blockquote>
<ol start="3">
<li><h3 id="heading-sentence-embeddings"><strong>Sentence Embeddings</strong></h3>
</li>
</ol>
<ul>
<li><p>Represent entire <strong>sentences</strong> in vector form.</p>
</li>
<li><p>Capture <strong>both meaning and context</strong>.</p>
</li>
<li><p>Similar sentences have embeddings that are <strong>closer together</strong>.</p>
</li>
</ul>
<p>📌 <strong>Models:</strong> Sentence-BERT (SBERT), Universal Sentence Encoder (USE) [by Google], Infersent<br />📌 <strong>Use Cases:</strong> Semantic search, text retrieval</p>
<blockquote>
<p><strong>Note:</strong> Sentence embeddings are a type of <strong>text embedding</strong> but specifically focus on <strong>entire sentences</strong></p>
</blockquote>
<ol start="4">
<li><h3 id="heading-image-embeddings"><strong>Image Embeddings</strong></h3>
</li>
</ol>
<ul>
<li><p>Convert <strong>images</strong> into <strong>feature vectors</strong>.</p>
</li>
<li><p>Helps in <strong>image similarity searches &amp; object recognition</strong>.</p>
</li>
</ul>
<p>📌 <strong>Models:</strong> CNN (ResNet, VGG, CLIP)<br />📌 <strong>Use Cases:</strong> Image search, object detection</p>
<ol start="5">
<li><h3 id="heading-graph-embeddings"><strong>Graph Embeddings</strong></h3>
</li>
</ol>
<ul>
<li><p>Represent <strong>nodes, edges, or entire graphs</strong> as vectors.</p>
</li>
<li><p>Used in <strong>social networks, fraud detection, recommendation systems</strong>.</p>
</li>
</ul>
<p>📌 <strong>Models:</strong> Node2Vec, GraphSAGE<br />📌 <strong>Use Cases:</strong> Fraud detection, social network analysis</p>
<ol start="6">
<li><h3 id="heading-video-embeddings"><strong>Video Embeddings</strong></h3>
</li>
</ol>
<ul>
<li>Convert <strong>both spatial (image) and temporal (motion) features</strong> into a <strong>meaningful vector sequence</strong>.</li>
</ul>
<p>📌 <strong>Models:</strong> C3D, CLIP (for video)<br />📌 <strong>Use Cases:</strong> Video search, activity recognition</p>
<ol start="7">
<li><h3 id="heading-audio-embeddings"><strong>Audio Embeddings</strong></h3>
</li>
</ol>
<ul>
<li><p>Convert <strong>sound waves into vector representations</strong>.</p>
</li>
<li><p>Capture <strong>pitch, tone, and speech meaning</strong>.</p>
</li>
</ul>
<p>📌 <strong>Models:</strong> <strong>Wav2Vec, OpenL3</strong><br />📌 <strong>Use Cases:</strong> Speech recognition, music classification</p>
<blockquote>
<p><strong>Note:</strong> <strong>All these embeddings fall under vector embeddings.</strong></p>
</blockquote>
<hr />
<h2 id="heading-shared-embedding-space"><strong>Shared Embedding Space</strong></h2>
<p>A <strong>shared embedding space</strong> is a common vector space where <strong>different types of data</strong> (e.g., text &amp; images) are mapped <strong>close together</strong> if they are related.</p>
<p><img src="https://www.dailydoseofds.com/content/images/2024/12/image.png" alt /></p>
<p><em>Image sources :</em> <a target="_blank" href="https://www.dailydoseofds.com/a-crash-course-on-building-rag-systems-part-5-with-implementation/">dailydoseofds.com</a></p>
<p>📌 <strong>Example:</strong> <strong>CLIP (Contrastive Language-Image Pretraining)</strong><br />- Developed by <strong>OpenAI</strong> to create a shared embedding space for <strong>images &amp; text</strong>.<br />- Allows images and textual descriptions to be <strong>compared directly</strong>.</p>
<blockquote>
<p><strong>Use Case:</strong> You can <strong>search for images using text descriptions!</strong></p>
</blockquote>
<hr />
<h2 id="heading-applications-of-embeddings"><strong>Applications of Embeddings :</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738759812131/6ebb539c-290f-49eb-9bbf-646e567400f8.jpeg" alt class="image--center mx-auto" /></p>
<p><em>Image sources :</em> <a target="_blank" href="https://www.lyzr.ai/glossaries/contextual-embeddings/">lyzr.ai</a></p>
<ul>
<li><p><strong>Large Language Models (LLMs):</strong> Convert input tokens into <strong>token embeddings</strong>.</p>
</li>
<li><p><strong>Semantic Search:</strong> Retrieves <strong>similar sentences</strong> to improve <strong>search relevance</strong>.</p>
</li>
<li><p><strong>RAG (Retrieval-Augmented Generation):</strong> Uses <strong>sentence embeddings</strong> to retrieve relevant text.</p>
</li>
<li><p><strong>Recommendations:</strong> Finds <strong>similar products</strong> using <strong>vector search</strong>.</p>
</li>
<li><p><strong>Anomaly Detection:</strong> Identifies unusual patterns in data.</p>
</li>
</ul>
<hr />
<h2 id="heading-famous-word-embedding-models"><strong>Famous Word Embedding Models :</strong></h2>
<p><img src="https://miro.medium.com/v2/resize:fit:1400/1*SaTpzUFhBIFW71Dxy_49Vw.png" alt /></p>
<p><em>Image sources :</em> <a target="_blank" href="https://medium.com/@punya8147_26846/unlocking-the-power-of-vector-embeddings-a-beginners-guide-to-their-types-and-applications-3b092a49516c">medium.com</a></p>
<ul>
<li><p><strong>Word2Vec</strong> → Predicts a word based on surrounding words (<strong>developed by Google</strong>).</p>
</li>
<li><p><strong>GloVe</strong> → Similar to Word2Vec but with a different mathematical approach (<strong>by Stanford</strong>).</p>
</li>
</ul>
<hr />
<h3 id="heading-lets-understank-tokens">Let’s Understand Tokens:</h3>
<p><img src="https://cdn.prod.website-files.com/61e7d259b7746e3f63f0b6be/6630e466c569a5f73cd81c9e_Understanding%20LLM%20Billing_%20From%20Characters%20to%20Tokens.jpg" alt="Understanding LLM Billing: From Characters to Tokens | Eden AI" /></p>
<p><em>Image sources :</em> <a target="_blank" href="https://www.edenai.co/post/understanding-llm-billing-from-characters-to-tokens">edenai.co</a></p>
<h3 id="heading-what-is-a-token"><strong>What is a Token?</strong></h3>
<p>A <strong>token</strong> is a small unit of text used in NLP models. It can be:</p>
<ul>
<li><p>A <strong>word</strong> (e.g., "cat")</p>
</li>
<li><p>A <strong>subword</strong> (e.g., "play" and "ing" in "playing")</p>
</li>
<li><p>A <strong>character</strong> (e.g., "C", "a", "t")</p>
</li>
<li><p>A <strong>symbol or punctuation</strong> (e.g., "!")</p>
</li>
</ul>
<blockquote>
<p><strong>Example:</strong><br />- Sentence: <code>"I love AI!"</code><br />- Tokens: <code>["I", "love", "AI", "!"]</code></p>
</blockquote>
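<p>A toy tokenizer makes the example above reproducible. Note that this is a simple word-and-punctuation splitter for illustration only; production LLMs use learned subword tokenizers such as BPE or WordPiece:</p>
<pre><code class="lang-python">import re

def simple_tokenize(text):
    # Each word becomes one token; each punctuation mark becomes its own token.
    return re.findall(r"\w+|[^\w\s]", text)

print(simple_tokenize("I love AI!"))  # ['I', 'love', 'AI', '!']
</code></pre>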
<h3 id="heading-token-vs-embedding"><strong>Token vs. Embedding</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>Token</strong></td><td><strong>Embedding</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Definition</strong></td><td>A unit of text (word, subword, character)</td><td>A numeric vector representing meaning</td></tr>
<tr>
<td><strong>Format</strong></td><td>Text</td><td>Numbers (vector)</td></tr>
<tr>
<td><strong>Example</strong></td><td><code>"cat"</code> → Token</td><td><code>"cat"</code> → <code>[0.23, 0.87, -0.45, ...]</code></td></tr>
</tbody>
</table>
</div><blockquote>
<ul>
<li><p><strong>Tokens</strong> help models read text.</p>
</li>
<li><p><strong>Embeddings</strong> help models understand meaning.</p>
</li>
</ul>
</blockquote>
<hr />
<p><strong>Example Code: How Embeddings Works :</strong> <a target="_blank" href="https://github.com/Raghul-M/GenAI/blob/main/Embedings/Token-Embeddings.ipynb">https://github.com/Raghul-M/GenAI/blob/main/Embedings/Token-Embeddings.ipynb</a></p>
<h2 id="heading-conclusion">Conclusion :</h2>
<p><img src="https://media.tenor.com/images/e5c21d98f56c4af119b4e14b6a9df893/tenor.gif" alt="The Matrix Recoded: Fan Fiction Movie Sequel Pitch | xcaliber | Commaful" class="image--center mx-auto" /></p>
<p>Embeddings have become a fundamental building block in modern AI, enabling machines to understand and represent complex data in a way that drives advancements in natural language processing, search, and more. Their ability to convert words, sentences, or documents into numerical vectors allows for more efficient and accurate tasks like similarity search and classification. As technology evolves, embeddings will continue to play a crucial role in shaping the future of AI.</p>
<p>Connect with me on Linkedin: <a target="_blank" href="https://www.linkedin.com/in/m-raghul/"><strong>Raghul M</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Understanding Transformers: The Backbone of Modern AI]]></title><description><![CDATA[Introduction :

Hey everyone 👋🏻 !
Let me Introduce Transformers wait a minute, not the movie! Though, I have to admit, the Transformers movies are pretty cool, especially the Autobots and Megatrons . But today, I’m here to introduce something even ...]]></description><link>https://blog.raghul.in/understanding-transformers-the-backbone-of-modern-ai</link><guid isPermaLink="true">https://blog.raghul.in/understanding-transformers-the-backbone-of-modern-ai</guid><category><![CDATA[nlp transformers]]></category><category><![CDATA[AI]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[genai]]></category><category><![CDATA[Computer Science]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Fri, 31 Jan 2025 12:09:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738326112417/4379a9bc-aa6f-4bfc-9ed1-6ba959c585c8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction :</h1>
<p><img src="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7485933d-06de-47d1-a3f1-734b0d379df4_1200x630.jpeg" alt="How do Transformers Work in NLP? A Guide to the Latest State-of-the-Art  Models" /></p>
<p>Hey everyone 👋🏻 !</p>
<p>Let me introduce <strong>Transformers</strong>. Wait a minute, not the movie! Though I have to admit, the Transformers movies are pretty cool, especially the Autobots and Megatron. But today, I’m here to introduce something even cooler: the <strong>Transformer model</strong>, which has completely changed the AI industry.</p>
<p>The Transformer model and its architecture were proposed by a group of Google researchers in the 2017 paper <a target="_blank" href="https://arxiv.org/pdf/1706.03762"><em>Attention Is All You Need</em></a>. This innovation <strong>revolutionized</strong> the entire AI landscape. These models power state-of-the-art <strong>Natural Language Processing (NLP)</strong> applications, including <strong>GPT, BERT, and T5</strong>. Unlike traditional models like <strong>RNNs and LSTMs</strong>, Transformers leverage the <strong>self-attention mechanism</strong> to process data more efficiently, leading to groundbreaking advancements in <strong>machine learning and artificial intelligence</strong>.</p>
<p>In this blog, I will break down <strong>Transformers, Transformer architecture, its components (encoders, decoders, attention mechanisms), and its impact on AI</strong>. I will also provide a <strong>hands-on example using the Gensim library</strong>. No need to worry—it won’t be too mathematical!</p>
<h2 id="heading-understanding-some-fundamentals">Understanding Some Fundamentals</h2>
<p>Before diving into Transformers, let’s first understand some key concepts.</p>
<h3 id="heading-what-is-a-language-model">What is a Language Model?</h3>
<p><img src="https://miro.medium.com/v2/resize:fit:867/1*_MrDp6w3Xc-yLuCTbco0xw.png" alt="The basics of Language Modeling. Notes from CS224n lesson 6 and 7. | by  Antonio Lopardo | Medium" class="image--right mx-auto mr-0" /></p>
<p>A <strong>language model</strong> is essentially a system that <strong>predicts the next word in a sentence</strong>. For example:</p>
<ul>
<li><p>Google's popular language model is <strong>BERT</strong>.</p>
</li>
<li><p>OpenAI's <strong>ChatGPT</strong> is based on the <strong>GPT</strong> model.</p>
</li>
</ul>
<p>GPT is called a <strong>large language model (LLM)</strong> because it is trained on <strong>billions of parameters</strong>, making it incredibly powerful. The primary goal of a language model is to <strong>predict the next word in a sentence</strong> accurately.</p>
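<p>To make “predict the next word” concrete, here is a toy bigram sketch: it simply counts, for each word in a tiny corpus, which word most often follows it. Real LLMs learn this with billions of parameters rather than raw counts, but the objective is the same:</p>
<pre><code class="lang-python">from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: for each word, how often each next word follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' (follows 'the' twice in the corpus)
</code></pre>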
<h3 id="heading-word-embeddings-and-tokens">Word Embeddings and Tokens</h3>
<p>Machine learning models don't understand text directly; instead, they work with <strong>numerical representations</strong> known as <strong>word embeddings</strong>. Before feeding text into a Transformer model, words are broken down into <strong>tokens</strong> and transformed into embeddings.</p>
<p><strong>For example:</strong></p>
<ul>
<li>The phrase <strong>"river bank"</strong> and <strong>"financial bank"</strong> will have different embeddings, even though they share the word <em>bank</em>.</li>
</ul>
<h4 id="heading-tokens">Tokens</h4>
<p><img src="https://curator-production.s3.us.cloud-object-storage.appdomain.cloud/uploads/course-v1:IBMSkillsNetwork+GPXX0A7BEN+v1.jpg" alt="LLM Foundations: Get started with tokenization" class="image--center mx-auto" /></p>
<p>Tokens are the smallest units of text used in <strong>Natural Language Processing (NLP)</strong>. The process of breaking text into these smaller units is called <strong>tokenization</strong>.</p>
<p><strong>Example:</strong></p>
<ul>
<li>"unbelievable" → ["un", "believable"]</li>
</ul>
<h2 id="heading-why-have-transformers-revolutionized-ai">Why Have Transformers Revolutionized AI?</h2>
<p>Transformers have <strong>redefined AI</strong> due to several key factors:</p>
<ol>
<li><p><strong>Parallel Processing</strong> – Unlike RNNs, which process words <strong>sequentially</strong>, Transformers analyze <strong>the entire input at once</strong>, making them significantly faster.</p>
</li>
<li><p><strong>Better Context Understanding</strong> – Transformers capture <strong>long-range dependencies</strong>, allowing them to understand language better than traditional models.</p>
</li>
<li><p><strong>Scalability</strong> – Models like <strong>GPT-4 and BERT</strong> demonstrate how well Transformers scale with <strong>massive datasets</strong>.</p>
</li>
<li><p><strong>Versatility</strong> – Used in chatbots, translation, summarization, text generation, image processing, and more.</p>
</li>
</ol>
<hr />
<h2 id="heading-transformer-architecture-a-high-level-overview">Transformer Architecture: A High-Level Overview</h2>
<p><img src="https://miro.medium.com/v2/resize:fit:1200/0*YvXO4YstJyCFegxK.png" alt="Transformer Architecture (NLP). From an Natural Language Processing… | by  Anmol Talwar | Medium" /></p>
<p>Transformers follow an <strong>encoder-decoder</strong> architecture. Here’s a simplified breakdown:</p>
<ul>
<li><p><strong>Encoder</strong>: Takes the input sentence, generates embeddings for each word/token, and produces <strong>contextual embeddings</strong>.</p>
</li>
<li><p><strong>Decoder</strong>: Uses the contextual embeddings to <strong>predict the next word</strong>, generating an output with the highest probability.</p>
</li>
</ul>
<h3 id="heading-variants-of-transformers">Variants of Transformers</h3>
<ul>
<li><p><strong>Transformer</strong>: Generic encoder-decoder architecture.</p>
</li>
<li><p><strong>BERT</strong>: Only has an <strong>encoder</strong>.</p>
</li>
<li><p><strong>GPT</strong>: Only has a <strong>decoder</strong>.</p>
</li>
</ul>
<h3 id="heading-understanding-encoders-and-decoders">Understanding Encoders and Decoders</h3>
<h4 id="heading-encoder">Encoder:</h4>
<ul>
<li><p>Converts input tokens into <strong>meaningful representations</strong>.</p>
</li>
<li><p>Uses <strong>self-attention</strong> to understand relationships between words.</p>
</li>
<li><p>Stacks <strong>multiple layers</strong> for deep feature extraction.</p>
</li>
</ul>
<h4 id="heading-decoder">Decoder:</h4>
<ul>
<li><p>Takes encoder outputs and <strong>generates predictions</strong>.</p>
</li>
<li><p>Uses <strong>self-attention + cross-attention</strong> to ensure coherence in output.</p>
</li>
</ul>
<h3 id="heading-static-vs-contextual-embeddings">Static vs. Contextual Embeddings</h3>
<ul>
<li><p><strong>Static Embeddings</strong>: Pre-trained word representations like <strong>Word2Vec, GloVe</strong>.</p>
</li>
<li><p><strong>Contextual Embeddings</strong>: Transformer-generated dynamic embeddings that <strong>change based on context</strong>.</p>
</li>
</ul>
<p><strong>Example:</strong></p>
<ul>
<li><em>"bank"</em> in <em>"river bank"</em> vs. <em>"financial bank"</em> will have different embeddings in Transformer models.</li>
</ul>
<h2 id="heading-attention-mechanism-the-heart-of-transformers">Attention Mechanism: The Heart of Transformers</h2>
<ol>
<li><p><strong>Self-Attention</strong>:</p>
<ul>
<li><p>Each word <strong>attends</strong> to every other word in a sentence.</p>
</li>
<li><p>Helps the model understand <strong>context efficiently</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>Multi-Head Attention</strong>:</p>
<ul>
<li><p>Instead of using <strong>a single attention mechanism</strong>, Transformers use <strong>multiple parallel attention heads</strong>.</p>
</li>
<li><p>Each head captures different aspects of meaning.</p>
</li>
</ul>
</li>
<li><p><strong>Cross-Attention</strong>:</p>
<ul>
<li>Used in the decoder to <strong>attend to encoder outputs</strong>, ensuring context-rich responses.</li>
</ul>
</li>
</ol>
<h3 id="heading-how-text-is-converted-into-output-step-by-step">How Text is Converted into Output (Step-by-Step)</h3>
<ol>
<li><p><strong>Tokenization</strong>: Text is broken into smaller units.</p>
</li>
<li><p><strong>Embedding</strong>: Tokens are converted into numerical vectors.</p>
</li>
<li><p><strong>Positional Encoding</strong>: Adds information about word order.</p>
</li>
<li><p><strong>Self-Attention &amp; Multi-Head Attention</strong>: Captures contextual relationships.</p>
</li>
<li><p><strong>Feed-Forward Network</strong>: Processes extracted features.</p>
</li>
<li><p><strong>Output Generation</strong>: Decoder produces meaningful text.</p>
</li>
</ol>
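<p>Step 3 (positional encoding) is easy to compute by hand. Below is a sketch of the sinusoidal scheme from the original Transformer paper, assuming an even embedding size:</p>
<pre><code class="lang-python">import numpy as np

def positional_encoding(seq_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angle = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle)  # even dimensions
    pe[:, 1::2] = np.cos(angle)  # odd dimensions
    return pe

pe = positional_encoding(10, 16)
print(pe.shape)  # one encoding vector is added to each token embedding
</code></pre>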
<hr />
<h2 id="heading-hands-on-example-using-gensim-for-word-embeddings">Hands-on Example: Using Gensim for Word Embeddings</h2>
<p>Before diving into Transformer-based models, let’s see how <strong>word embeddings</strong> work with Gensim.</p>
<h3 id="heading-step-1-install-gensim">Step 1: Install Gensim</h3>
<pre><code class="lang-bash">pip install gensim
</code></pre>
<h3 id="heading-step-2-train-a-word2vec-model">Step 2: Train a Word2Vec Model</h3>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> gensim.models <span class="hljs-keyword">import</span> Word2Vec

<span class="hljs-comment"># Sample dataset</span>
sentences = [[<span class="hljs-string">'machine'</span>, <span class="hljs-string">'learning'</span>, <span class="hljs-string">'is'</span>, <span class="hljs-string">'fun'</span>], [<span class="hljs-string">'deep'</span>, <span class="hljs-string">'learning'</span>, <span class="hljs-string">'is'</span>, <span class="hljs-string">'powerful'</span>]]

<span class="hljs-comment"># Train Word2Vec model</span>
model = Word2Vec(sentences, vector_size=<span class="hljs-number">50</span>, window=<span class="hljs-number">3</span>, min_count=<span class="hljs-number">1</span>, workers=<span class="hljs-number">4</span>)

<span class="hljs-comment"># Get similar words</span>
print(model.wv.most_similar(<span class="hljs-string">'learning'</span>))
</code></pre>
<hr />
<h2 id="heading-conclusion">Conclusion :</h2>
<p>Transformers have <strong>revolutionized AI</strong>, enabling state-of-the-art <strong>NLP applications</strong>. Their ability to <strong>process large datasets, understand context deeply, and handle long-range dependencies</strong> makes them the <strong>go-to choice</strong> for modern AI systems. With <strong>further advancements</strong>, Transformers will continue to shape the future of AI. 🚀</p>
<p>Connect with me on Linkedin: <a target="_blank" href="https://www.linkedin.com/in/m-raghul/"><strong>Raghul M</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Python Webapp Deployment on Heroku Using GitHub Actions]]></title><description><![CDATA[This project demonstrates how to deploy a simple Python web application using Flask to Heroku deployment, with automated testing using pytest and deployment using GitHub Actions.
Github Repo link : 🔗Heroku Deployment using Github Actions
Project Des...]]></description><link>https://blog.raghul.in/python-webapp-deployment-on-heroku-using-github-actions</link><guid isPermaLink="true">https://blog.raghul.in/python-webapp-deployment-on-heroku-using-github-actions</guid><category><![CDATA[Heroku]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[Python]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Tue, 16 Jul 2024 15:56:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721145471330/8350209b-5bde-4ce4-a148-71478df7cf0d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721144964372/6235cd69-db3c-48e1-99ea-7a6561382f40.png" alt class="image--center mx-auto" /></p>
<p>This project demonstrates how to deploy a simple Flask (Python) web application to Heroku, with automated testing using pytest and automated deployment via GitHub Actions.</p>
<p><strong>Github Repo link : 🔗</strong><a target="_blank" href="https://github.com/Raghul-M/Python-Github_Actions-Heroku">Heroku Deployment using Github Actions</a></p>
<h3 id="heading-project-description">Project Description</h3>
<p>This project includes:</p>
<ul>
<li><p>A basic Flask web application.</p>
</li>
<li><p>Tests written in pytest.</p>
</li>
<li><p>Automated deployment to Heroku using GitHub Actions.</p>
</li>
</ul>
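<p>For orientation, a Flask app of this shape, together with a pytest-style test, can be as small as the following (a hedged sketch; the actual <code>app.py</code> in the repo may differ):</p>
<pre><code class="lang-python">from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    # Endpoint exercised both by the browser and by the test below
    return "Hello from Flask on Heroku!"

# Would normally live in test_app.py and be discovered by pytest
def test_home():
    response = app.test_client().get("/")
    assert response.status_code == 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
</code></pre>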
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li><p>Python 3.10+</p>
</li>
<li><p>Git</p>
</li>
<li><p>GitHub account</p>
</li>
<li><p>Heroku account</p>
</li>
<li><p>Heroku CLI</p>
</li>
</ul>
<h3 id="heading-local-setup">Local Setup</h3>
<ol>
<li><p><strong>Clone the repository:</strong></p>
<pre><code class="lang-sh"> git <span class="hljs-built_in">clone</span> https://github.com/Raghul-M/Python-Github_Actions-Heroku.git
 <span class="hljs-built_in">cd</span> Python-Github_Actions-Heroku
</code></pre>
</li>
<li><p><strong>Create a virtual environment:</strong></p>
<pre><code class="lang-sh"> python -m venv venv
 <span class="hljs-built_in">source</span> venv/bin/activate  <span class="hljs-comment"># On Windows use `venv\Scripts\activate`</span>
</code></pre>
</li>
<li><p><strong>Install dependencies:</strong></p>
<pre><code class="lang-python"> pip install -r requirements.txt
</code></pre>
</li>
<li><p><strong>Run the application locally:</strong></p>
<pre><code class="lang-sh"> python3 app.py
</code></pre>
</li>
</ol>
<h3 id="heading-running-tests">Running Tests</h3>
<p><strong>Run tests with pytest:</strong></p>
<pre><code class="lang-sh">pytest
</code></pre>
<p><img src="https://github.com/Raghul-M/Python-Github_Actions-Heroku/assets/71755586/f824b763-f8db-44fa-902b-0aead8c918df" alt="Screenshot from 2024-06-28 15-29-45" /></p>
<h2 id="heading-deployment">Deployment</h2>
<p><strong>Heroku Setup</strong></p>
<p><img src="https://github.com/Raghul-M/Python-Github_Actions-Heroku/assets/71755586/2d0e3693-8991-40d7-b487-06050c70ad7a" alt="Screenshot from 2024-06-28 15-40-58" /></p>
<ol>
<li><p><strong>Login to Heroku:</strong></p>
<pre><code class="lang-sh"> heroku login
</code></pre>
</li>
<li><p><strong>Create a new Heroku app:</strong></p>
<pre><code class="lang-sh"> heroku create your-app-name
</code></pre>
</li>
<li><p><strong>Set up GitHub Actions for Heroku deployment:</strong></p>
<ul>
<li><p>Go to your GitHub repository.</p>
</li>
<li><p>Navigate to <code>Settings</code> &gt; <code>Secrets</code> &gt; <code>New repository secret</code>.</p>
</li>
<li><p>Add the following secrets:</p>
<ul>
<li><p><code>HEROKU_API_KEY</code>: Your Heroku API key.</p>
</li>
<li><p><code>HEROKU_APP_NAME</code>: Your Heroku app name.</p>
</li>
<li><p><code>HEROKU_EMAIL</code>: The email address of your Heroku account (used by the deploy action).</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Add the GitHub Actions workflow file (</strong><code>.github/workflows/deploy.yml</code>):</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">name:</span> <span class="hljs-string">Python</span> <span class="hljs-string">application</span>
 <span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">"main"</span> ]
  <span class="hljs-attr">pull_request:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">"main"</span> ]
  <span class="hljs-attr">permissions:</span>
    <span class="hljs-attr">contents:</span> <span class="hljs-string">read</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-attr">build:</span>

  <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>

  <span class="hljs-attr">steps:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>
    <span class="hljs-attr">with:</span>
      <span class="hljs-attr">fetch-depth:</span> <span class="hljs-number">0</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Set</span> <span class="hljs-string">up</span> <span class="hljs-string">Python</span> <span class="hljs-number">3.10</span>
    <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/setup-python@v3</span>
    <span class="hljs-attr">with:</span>
      <span class="hljs-attr">python-version:</span> <span class="hljs-string">"3.10"</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">dependencies</span>
    <span class="hljs-attr">run:</span> <span class="hljs-string">|
      python -m pip install --upgrade pip
      pip install flake8 pytest
      if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
</span>  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Lint</span> <span class="hljs-string">with</span> <span class="hljs-string">flake8</span>
    <span class="hljs-attr">run:</span> <span class="hljs-string">|
      # stop the build if there are Python syntax errors or undefined names
      flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
      # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
      flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
</span>  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Test</span> <span class="hljs-string">with</span> <span class="hljs-string">pytest</span>
    <span class="hljs-attr">run:</span> <span class="hljs-string">|
      pytest
</span>
  <span class="hljs-attr">deploy:</span>
  <span class="hljs-attr">needs:</span> <span class="hljs-string">build</span>
  <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
  <span class="hljs-attr">steps:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">akhileshns/heroku-deploy@v3.12.12</span>
    <span class="hljs-attr">with:</span>
        <span class="hljs-attr">heroku_api_key:</span> <span class="hljs-string">${{secrets.HEROKU_API_TOKEN}}</span>
        <span class="hljs-attr">heroku_app_name:</span> <span class="hljs-string">${{secrets.HEROKU_APP_NAME}}</span> <span class="hljs-comment">#Must be unique in Heroku</span>
        <span class="hljs-attr">heroku_email:</span> <span class="hljs-string">${{secrets.HEROKU_EMAIL}}</span>
</code></pre>
</li>
<li><p><strong>Create a Procfile</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721145169473/88cea368-c6d7-4453-8fad-29ee54523899.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Create a Runtime file</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721145208054/779b0230-0c76-4888-8b6d-f3715b42ab8f.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-output">Output :</h3>
<p><strong>Localhost</strong></p>
<p><img src="https://github.com/Raghul-M/Python-Github_Actions-Heroku/assets/71755586/1a8a00c3-deba-4e39-bfd0-6b5f72601337" alt="Screenshot from 2024-06-28 15-36-18" /></p>
<p><strong>Deployed App on Heroku</strong></p>
<p><img src="https://github.com/Raghul-M/Python-Github_Actions-Heroku/assets/71755586/9b08a9fa-63cb-4a0e-bb79-7ba8144880c0" alt="Screenshot from 2024-06-28 15-37-23" /></p>
<h3 id="heading-contributing"><strong>Contributing</strong></h3>
<p>Contributions are welcome! If you have suggestions, bug reports, or want to add new features, feel free to submit a pull request.</p>
<p>Feel free to explore, contribute, and adapt this project to suit your needs. If you encounter any issues or have suggestions for improvement, please raise them in the GitHub repository's issues section. Happy coding! 🚀</p>
<p>Connect with me on Linkedin: <a target="_blank" href="https://www.linkedin.com/in/m-raghul/"><strong>Raghul M</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Docker Streamlit Application Deployment Using GitHub Actions on AWS Self-Hosted Runner]]></title><description><![CDATA[This project demonstrates a CI/CD pipeline using GitHub Actions to build a Dockerized Streamlit Python application and deploy it to an AWS EC2 Self Hosted Runner.
Github Repo link🔗Docker Deployment using Github Actions
Project Workflow :

Code Commi...]]></description><link>https://blog.raghul.in/docker-streamlit-application-deployment-using-github-actions-on-aws-self-hosted-runner</link><guid isPermaLink="true">https://blog.raghul.in/docker-streamlit-application-deployment-using-github-actions-on-aws-self-hosted-runner</guid><category><![CDATA[githubaction]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Tue, 16 Jul 2024 09:21:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721135694276/ef595596-ffa8-4dd4-b724-01a0e2af93a1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721120656991/03e142bb-7b2f-4276-9c07-190880956c78.png" alt class="image--center mx-auto" /></p>
<p>This project demonstrates a CI/CD pipeline that uses GitHub Actions to build a Dockerized Streamlit (Python) application and deploy it to an AWS EC2 instance via a self-hosted runner.</p>
<p><strong>Github Repo link : 🔗</strong><a target="_blank" href="https://github.com/Raghul-M/Docker_Github-Actions_AWS-App/">Docker Deployment using Github Actions</a></p>
<p><strong>Project Workflow :</strong></p>
<ol>
<li><p><strong>Code Commit:</strong> Developers push code changes to the <code>main</code> branch.</p>
</li>
<li><p><strong>CI Build Job</strong>: GitHub Actions triggers a build job (<code>build.yml</code>) upon code commit.</p>
</li>
<li><p><strong>Docker Image Build</strong>: The CI job builds a Docker image from the Dockerfile and pushes it to Docker Hub upon successful build.</p>
</li>
<li><p><strong>CD Deploy Job</strong>: After the image is pushed, another GitHub Actions job (<code>deploy.yml</code>) is triggered.</p>
</li>
<li><p><strong>Deployment to AWS EC2</strong>:</p>
<ul>
<li><p>The deploy job uses a self-hosted runner on an AWS EC2 instance.</p>
</li>
<li><p>It pulls the latest Docker image from Docker Hub.</p>
</li>
<li><p>The image is then run as a container on the EC2 instance.</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://github.com/Raghul-M/Docker_Github-Actions_AWS-App/assets/71755586/060fa7d3-5506-4cb7-afbd-bd3c90c21936" alt="Screenshot from 2024-06-29 13-38-02" /></p>
<p><strong>Repository Structure:</strong></p>
<ul>
<li><p><code>.github/workflows/</code>: Contains GitHub Actions workflows.</p>
<ul>
<li><p><code>build.yml</code>: Defines CI build steps.</p>
</li>
<li><p><code>deploy.yml</code>: Defines CD deployment steps.</p>
</li>
</ul>
</li>
<li><p><code>Dockerfile</code>: Configuration for building the Docker image.</p>
</li>
<li><p><a target="_blank" href="http://README.md"><code>README.md</code></a>: Project documentation (you are currently reading this file).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721121142586/f884d0b2-464a-41ab-a83e-d7069608a40f.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
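<p>A typical <code>Dockerfile</code> for a Streamlit app looks roughly like this (a generic sketch; the repo's actual file may differ):</p>
<pre><code class="lang-dockerfile">FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8501
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
</code></pre>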
<p><strong>Prerequisites:</strong></p>
<p>Before running the CI/CD pipeline, ensure you have set up the following:</p>
<ul>
<li><p><strong>GitHub Repository</strong>: Configure secrets for Docker Hub credentials (<code>DOCKER_USERNAME</code>, <code>DOCKER_PASSWORD</code>) as Repo Secrets.</p>
</li>
<li><p><strong>Docker Hub Account</strong>: Repository for storing Docker images.</p>
</li>
<li><p><strong>AWS EC2 Instance</strong>: Ensure the instance is running, accessible via SSH, and configured as a self-hosted runner.</p>
</li>
</ul>
<p><img src="https://private-user-images.githubusercontent.com/71755586/344378945-2fc78c78-2f88-47b7-909c-d4a49a4fb220.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjExMjE0MTEsIm5iZiI6MTcyMTEyMTExMSwicGF0aCI6Ii83MTc1NTU4Ni8zNDQzNzg5NDUtMmZjNzhjNzgtMmY4OC00N2I3LTkwOWMtZDRhNDlhNGZiMjIwLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MTYlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzE2VDA5MTE1MVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWVjMGY2M2M0N2RjMmQyNmY4ODAwY2UwMzAwYzdiMGZhYjU3YzJkNzkxYjg3NDAxZGVlMTc3MjIyNGQ2MzUyMmUmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.vGSAVVlQVxzX2FKr2amlv77X1sOUw1sLau__bCLHZd0" alt="Screenshot from 2024-06-29 13-39-57" /></p>
<p><strong>Final Deployment :</strong></p>
<p><img src="https://private-user-images.githubusercontent.com/71755586/344378991-c0cf36d7-9807-4d35-aee9-8e65c1fd4bb7.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjExMjE0MTEsIm5iZiI6MTcyMTEyMTExMSwicGF0aCI6Ii83MTc1NTU4Ni8zNDQzNzg5OTEtYzBjZjM2ZDctOTgwNy00ZDM1LWFlZTktOGU2NWMxZmQ0YmI3LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MTYlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzE2VDA5MTE1MVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWQ5NzRlOTNmZWY5ODEzY2Y3YTcwMDk5ZThjNWVhMzRjYzg1ZDdmMWE2OTNkZjI4MTlhM2U4Y2ViMjZiZTU5MzImWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.x5VCdpgzfR7g1dxfaoHar9FJkDCivqLJI6TFvKWCZL0" alt="Screenshot from 2024-06-29 13-25-03" /></p>
<p><strong>Notes</strong></p>
<ul>
<li><p>Ensure that your Dockerfile is configured correctly to build and run your Python application.</p>
</li>
<li><p>Regularly monitor and maintain your AWS EC2 instance to ensure proper functioning of the deployed application.</p>
</li>
</ul>
<h2 id="heading-contributing">Contributing</h2>
<p>Contributions are welcome! If you have suggestions, bug reports, or want to add new features, feel free to submit a pull request.</p>
<p>Feel free to explore, contribute, and adapt this project to suit your needs. If you encounter any issues or have suggestions for improvement, please raise them in the GitHub repository's issues section. Happy coding! 🚀</p>
<p>Connect with me on Linkedin: <a target="_blank" href="https://www.linkedin.com/in/m-raghul/">Raghul M</a></p>
]]></content:encoded></item><item><title><![CDATA[Open Source 101: A Developer's Blueprint for Getting Started]]></title><description><![CDATA[Welcome to the exciting world of open-source development! Whether you're a seasoned coder or a programming enthusiast looking to dive into collaborative coding, this guide is your comprehensive blueprint to kickstart your journey.
What is Opensource ...]]></description><link>https://blog.raghul.in/open-source-101-a-developers-blueprint-for-getting-started</link><guid isPermaLink="true">https://blog.raghul.in/open-source-101-a-developers-blueprint-for-getting-started</guid><category><![CDATA[Open Source]]></category><category><![CDATA[Beginner Developers]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[GitHub]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Tue, 16 Jul 2024 06:37:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721111935037/6bd596ab-0551-4d91-90db-6a68d008a82c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to the exciting world of open-source development! Whether you're a seasoned coder or a programming enthusiast looking to dive into collaborative coding, this guide is your comprehensive blueprint to kickstart your journey.</p>
<h3 id="heading-what-is-opensource">What is Opensource ?</h3>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0x49nx0cl6n2tvyf34na.png" alt="opensource" class="image--center mx-auto" /></p>
<p>Imagine software as a recipe for a dish. When a chef creates a recipe, they write it down so other chefs can follow it and make the same dish. Open source is like sharing that recipe with everyone.</p>
<p>In the world of computers, software is a recipe that tells the computer how to do specific tasks. With open source, the software's creators share the instructions (called source code) with everyone. Anyone can see how the software works, change it to fit their needs, and even share their version with others. Instead of keeping the recipe a secret, open source lets everyone be part of the cooking process: a big kitchen where chefs (developers) work together, share ideas, and improve the recipe for everyone. The software becomes a collaborative effort, benefiting from the skills and creativity of many people. Popular open-source projects are like famous recipes that chefs worldwide contribute to and improve over time.</p>
<p><strong>OSS - Open Source Software</strong></p>
<p>Open Source Software refers to software whose source code is available to the public. Users can view, modify, and distribute the code, fostering collaboration and transparency.</p>
<p><strong>Examples of OSS</strong>: Linux, Firefox, Python, GIMP, LibreOffice etc...</p>
<p><strong>CSS - Closed Source Software</strong></p>
<p>Closed Source Software refers to proprietary software whose source code is not distributed to the public. Access, modification, and redistribution are restricted, and the code remains the exclusive property of its owner.</p>
<p><strong>Examples of CSS</strong>: Microsoft Windows, Adobe Photoshop, etc...</p>
<h2 id="heading-what-is-github">What is Github?</h2>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kx4ws7nftdys6vl47y9k.png" alt="Github" /></p>
<ul>
<li><p>GitHub is a leading platform for open-source development, facilitating global collaboration on diverse projects. Built on Git for version control, it enables easy forking, modification, and merging of code.</p>
</li>
<li><p>With a user-friendly interface and features like issue tracking and pull requests, GitHub streamlines collaborative development, promoting transparency and community engagement.</p>
</li>
<li><p>Developers leverage GitHub to share, collaborate, and benefit from the collective expertise of the open-source community, fostering innovation in software projects.</p>
</li>
</ul>
<p><strong>Repository :</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3dxtwmiqbr66ybibth2k.png" alt="Repo" /></p>
<p>A repository is a folder for storing documents and application source code; it can also track changes and maintain version history.</p>
<ol>
<li><p>A repository, or "<strong>repo</strong>" is a central storage for code and files, facilitating collaboration and version control in software development.</p>
</li>
<li><p>Platforms like GitHub offer tools to host and manage repositories, enhancing teamwork and tracking changes in projects.</p>
</li>
</ol>
<h3 id="heading-basic-terminologies-in-opensource"><strong>Basic Terminologies in Opensource:</strong></h3>
<ul>
<li><p><strong>Source Code:</strong> Source code is the human-readable set of instructions and statements written by a programmer using a programming language to create software. It is the original code that is written and edited by developers and is later compiled or interpreted to produce executable programs.</p>
</li>
<li><p><strong>Repository:</strong> Storage space for project files, hosted on platforms like GitHub.</p>
</li>
<li><p><strong>Version Control:</strong> System for tracking changes in source code over time, often using Git.</p>
</li>
<li><p><strong>Fork:</strong> Creating a personal copy of a project for independent modification.</p>
</li>
<li><p><strong>Pull Request (PR):</strong> Proposal to integrate changes from a fork into the original project.</p>
</li>
<li><p><strong>Issue:</strong> Discussion space for tasks, bug reports, and feature requests in a project.</p>
</li>
<li><p><strong>Merge:</strong> Combining changes from one branch or fork into another.</p>
</li>
<li><p><strong>Commit:</strong> Set of changes made to the source code, acting like a snapshot.</p>
</li>
<li><p><strong>Branch:</strong> Separate line of development within a repository.</p>
</li>
<li><p><strong>License:</strong> Legal terms specifying how software can be used, modified, and distributed.</p>
</li>
<li><p><strong>Community:</strong> Group of developers, contributors, and users engaged in an open-source project.</p>
</li>
<li><p><strong>Contributor:</strong> Individual contributing to an open-source project through code, reports, or assistance.</p>
</li>
<li><p><strong>Maintainer:</strong> Person or group overseeing and managing an open source project.</p>
</li>
<li><p><strong>README:</strong> File providing essential information about a project, including usage instructions.</p>
</li>
<li><p><strong>Code of Conduct:</strong> Guidelines on acceptable behavior.</p>
</li>
<li><p><strong>Contributing Guidelines:</strong> Instructions describing how to contribute and collaborate on a project.</p>
</li>
<li><p><strong>Upstream:</strong> The original source project from which forks and derived versions flow.</p>
</li>
<li><p><strong>Downstream:</strong> Projects or forks that build on, and receive changes from, the upstream project.</p>
</li>
</ul>
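<p>In practice, the fork, branch, commit, and pull request terms above map onto a handful of git commands (the repository and branch names here are placeholders):</p>
<pre><code class="lang-sh">git clone https://github.com/your-username/project.git  # clone your fork
cd project
git checkout -b fix-readme-typo        # branch: a separate line of development
# ...edit files...
git add README.md
git commit -m "Fix typo in README"     # commit: a snapshot of your changes
git push origin fix-readme-typo        # then open a Pull Request on GitHub
</code></pre>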
<h3 id="heading-ways-to-contribute">Ways to Contribute</h3>
<ul>
<li><p>Code</p>
</li>
<li><p>Documentation</p>
</li>
<li><p>Advocacy (write blogs, share on social media, give talks or workshops)</p>
</li>
<li><p>Community</p>
</li>
</ul>
<h3 id="heading-how-to-start-contributing">How to start Contributing:</h3>
<ul>
<li>Search for issues labeled "good first issue" and start there.</li>
</ul>
<pre><code class="lang-plaintext">Some websites for finding issues:

https://up-for-grabs.net/
https://goodfirstissues.com/
https://goodfirstissue.dev/
https://www.codetriage.com/
</code></pre>
<ul>
<li><p>Attend regular meetings in the Community</p>
</li>
<li><p>Join sprints</p>
</li>
<li><p>Participate in an open-source program such as Google Summer of Code or Hacktoberfest.</p>
</li>
</ul>
<h3 id="heading-getting-started-as-a-newbie">Getting Started as a Newbie:</h3>
<p>Watch this tutorial from EddieHub to get started with open source: <a target="_blank" href="https://youtu.be/yzeVMecydCE?si=8bilG-FPc6qh_XuO">here</a>. Then join the EddieHub open-source community on GitHub: <a target="_blank" href="https://github.com/EddieHubCommunity">here</a>.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gkym7tx32u9wfzi4yaro.png" alt="IEddiehub" class="image--center mx-auto" /></p>
<p>EddieHub is an open-source organization focusing on reciprocal collaboration between members of the tech community; to encourage and promote communication, best practices, and technical expertise in an inclusive and welcoming environment.</p>
<h3 id="heading-why-developers-should-contribute-to-open-source-regardless-of-experience"><strong>Why Developers Should Contribute to Open Source, regardless of Experience?</strong></h3>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/drtj0vm00csjlv9q62wr.gif" alt="why gif" class="image--center mx-auto" /></p>
<p><strong>Skill Enhancement:</strong> Contributing to open-source projects provides developers, regardless of experience, with an opportunity to enhance their skills. It allows them to work on real-world projects, learn from experienced developers, and gain hands-on experience with different technologies and coding practices.</p>
<p><strong>Community Engagement:</strong> Open-source contributions enable developers to engage with a global community of like-minded individuals. This collaborative environment fosters networking, mentorship opportunities, and exposure to diverse perspectives, ultimately enriching a developer's professional growth and expanding their network in the tech industry.</p>
<h3 id="heading-conclusion">Conclusion:</h3>
<p>Embarking on the journey of open source is an invitation to a world of collaboration, innovation, and community-driven development. By understanding the fundamental terminology and concepts, you've taken the first step toward contributing and learning in this dynamic environment.</p>
<p><strong>Connect with me on:</strong> <a target="_blank" href="https://linktr.ee/raghul_m1">https://linktr.ee/raghul_m1</a></p>
]]></content:encoded></item><item><title><![CDATA[Hosting a Portfolio Website in Azure Cloud with Custom Domain(Free)👨‍💻.]]></title><description><![CDATA[We are going to Host a Static Website in Azure Cloud with CI/CD using Github Actions and Free Custom Domain.

So, Basically What is Cloud Computing?
Definition of Cloud Computing According to NIST (National Institute of Standards and Technology-US)
A...]]></description><link>https://blog.raghul.in/hosting-a-portfolio-website-in-azure-cloud-free</link><guid isPermaLink="true">https://blog.raghul.in/hosting-a-portfolio-website-in-azure-cloud-free</guid><category><![CDATA[Azure]]></category><category><![CDATA[portfolio]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Azure Web App]]></category><category><![CDATA[GitHub]]></category><dc:creator><![CDATA[Raghul M]]></dc:creator><pubDate>Sat, 18 Jun 2022 13:23:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/BMnhuwFYr7w/upload/645d8e40e47039f000e64f66f6ce1442.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We are going to Host a Static Website in Azure Cloud with CI/CD using Github Actions and Free Custom Domain.</p>
<p><img src="https://i.gifer.com/2E5l.gif" alt /></p>
<h2 id="heading-so-basically-what-is-cloud-computing">So, Basically What is Cloud Computing?</h2>
<p>Definition of Cloud Computing According to <strong>NIST</strong> (<strong>National Institute of Standards and Technology-US</strong>)</p>
<p>A model that enables ubiquitous (available everywhere) and convenient on-demand network access to a shared pool of configurable computing resources (e.g., networks, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or cloud-service-provider interaction.</p>
<h2 id="heading-in-layman-term">In Layman Term</h2>
<p>The cloud is the delivery of on-demand computing resources, everything from applications to data centers, over the internet on a pay-for-what-you-use basis. On-demand services include:</p>
<ul>
<li><p>Networks</p>
</li>
<li><p>Servers</p>
</li>
<li><p>Storage</p>
</li>
<li><p>Applications and Services</p>
</li>
</ul>
<p><strong>What is Azure Cloud??</strong></p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gkzi841ii8pgrobx60s7.jpg" alt="Azure Logo" /></p>
<p>Microsoft Azure, formerly known as Windows Azure, is Microsoft's public cloud computing platform. It provides a range of cloud services, including compute, analytics, storage, and networking. The platform comprises more than 200 products and cloud services designed to help you bring new solutions to life, solve today's challenges, and create the future. You can build, run, and manage applications across multiple clouds, on-premises, and at the edge, with the tools and frameworks of your choice.</p>
<h2 id="heading-solets-get-started-with-our-project">So,Lets get Started with Our Project:</h2>
<p><img src="https://i.gifer.com/7RQq.gif" alt /></p>
<pre><code class="lang-plaintext">Prerequisites :
1.Github Account
2.Azure Account
3.Source Code for your Portfolio website 💻
4.Custom Domain (Tip: Free Domain)
</code></pre>
<p><strong>Step 1</strong> : Github Account &amp; Source Code</p>
<p>GitHub is a web-based version-control and collaboration platform for software developers, owned by Microsoft.</p>
<p>Create a Free Github Account <a target="_blank" href="https://github.com/">Click here</a></p>
<pre><code class="lang-plaintext">Afterwards:
1. Create a new repository
2. Upload your portfolio website source code to the repo
</code></pre>
<p>Example :</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5bim3b8423z3i6pzlhbh.png" alt="Repo sample" /></p>
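<p>If you prefer the command line, the upload step can be sketched with Git as follows (the username and repository name here are hypothetical placeholders, so substitute your own):</p>
<pre><code class="lang-bash"># Clone the empty repository you just created (replace YOUR-USERNAME / my-portfolio)
git clone https://github.com/YOUR-USERNAME/my-portfolio.git
cd my-portfolio

# Copy your portfolio source files into the repo, then commit and push
cp -r ../portfolio-source/* .
git add .
git commit -m "Add portfolio website source"
git push origin main
</code></pre>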
<p><strong>Step 2</strong>: Azure Account</p>
<p>Azure is a public cloud owned by Microsoft. A free Azure account comes with $200 in credits for the first month and 40+ services that are free for life.</p>
<p>Students: <a target="_blank" href="https://azure.microsoft.com/en-in/free/students/">Click here</a> (credit card not required)</p>
<p>Others: <a target="_blank" href="https://azure.microsoft.com/en-us/free/search/?ef_id=CjwKCAjwqauVBhBGEiwAXOepkWZhzhkj2d0v4mOMjJbJH9gRaXAds9cb9-MqBmDHCVKfihaAyEtfARoCzcIQAvD_BwE%3AG%3As&amp;OCID=AID2200195_SEM_CjwKCAjwqauVBhBGEiwAXOepkWZhzhkj2d0v4mOMjJbJH9gRaXAds9cb9-MqBmDHCVKfihaAyEtfARoCzcIQAvD_BwE%3AG%3As&amp;gclid=CjwKCAjwqauVBhBGEiwAXOepkWZhzhkj2d0v4mOMjJbJH9gRaXAds9cb9-MqBmDHCVKfihaAyEtfARoCzcIQAvD_BwE">Click here</a> (credit card required)</p>
<p><strong>Step 3</strong>: Free Custom Domain</p>
<p>Once you create a free Azure account, you're eligible for 25+ free domains.</p>
<p>For a free custom domain, <a target="_blank" href="https://www.name.com/azure">click here</a>.</p>
<p><strong>Step 4</strong>: Let's Start the Project</p>
<p>Follow this link for a step-by-step walkthrough of the project and its documentation:</p>
<p>Project documentation: <a target="_blank" href="https://github.com/Raghul-M/Azure-Static-Website-Hosting/blob/main/README.md">GitHub Repo</a></p>
<p>Live demo: <a target="_blank" href="https://www.raghulm.me/">raghulm.me (disabled)</a></p>
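<p>The full walkthrough lives in the linked README; as a rough sketch of the idea (the account, resource-group, and domain names below are hypothetical, and the README's exact flow may differ), hosting a static site in an Azure storage account with the Azure CLI looks like this:</p>
<pre><code class="lang-bash"># Create a resource group and a general-purpose v2 storage account
az group create --name portfolio-rg --location eastus
az storage account create --name portfoliostore123 --resource-group portfolio-rg \
  --location eastus --sku Standard_LRS --kind StorageV2

# Enable static website hosting on the account
az storage blob service-properties update --account-name portfoliostore123 \
  --static-website --index-document index.html --404-document 404.html

# Upload the site files to the special $web container
az storage blob upload-batch --account-name portfoliostore123 \
  --destination '$web' --source ./my-portfolio

# Print the public endpoint of the hosted site
az storage account show --name portfoliostore123 --resource-group portfolio-rg \
  --query "primaryEndpoints.web" --output tsv

# Optional: after adding a CNAME record at your registrar pointing
# www.yourdomain.com at the endpoint above, map the custom domain
az storage account update --name portfoliostore123 --resource-group portfolio-rg \
  --custom-domain www.yourdomain.com
</code></pre>
<p>The same result can also be reached entirely through the Azure Portal UI, which is what most beginner walkthroughs use.</p>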
<h2 id="heading-conclusion">Conclusion</h2>
<blockquote>
<p>The best way to gain knowledge is to get your hands dirty: try new stuff and play with services in the cloud. Side projects are the best way to gain experience.</p>
</blockquote>
<p>Thanks for reading this post. Please share your valuable feedback!</p>
<p>Connect with me on :</p>
<p>Twitter: <a target="_blank" href="https://twitter.com/RaghulM01?t=gmRng6YU3iEn1lM5SVw7kA&amp;s=09">@RaghulM01</a></p>
<p>Portfolio: <a target="_blank" href="https://raghul-m.github.io/">raghul-m.github.io</a></p>
<p>In Collaboration with <a target="_blank" href="https://thinkdigital.hashnode.dev/">Think Digital , SRM</a></p>
]]></content:encoded></item></channel></rss>