I think ... - computinghttps://blog.kmonsoor.com/2024-01-22T00:00:00+06:00An example of scalability, in plain Bangla2024-01-22T00:00:00+06:002024-01-22T00:00:00+06:00Khaled Monsoortag:blog.kmonsoor.com,2024-01-22:/sohoz-banglay-scalability-example-1-bn/<p>Is the email your company is sending out scalable?</p><p>While working at Grab in Singapore, I once had a boss, an Indian guy named Arun.</p>
<p>A few days after I joined, a complaint reached my boss: Khaled does this task this way, not that way. At our weekly meeting, the boss asked me, why do you do it that way? I explained my reasons.
He answered in one line, and it still rings in my ears. He said, “Your intention is good, but it’s not scalable.”
Not much explanation was needed, because we both knew I had to work with 25+ teams. The way I was going about it, I could perhaps have managed the work with five or seven teams, but not with 25.</p>
<p>I notice this a lot in various initiatives in our country’s IT sector as well: “the intention is good, but it’s not scalable.”
Let me tell you what suddenly brought this to mind.</p>
<p>BRAC Bank sent out an email; the gist was that services would be down for a while due to system maintenance. The email body had no text, only an image. Fair enough. Many people don’t bother reading text, but with an image they’ll notice and read it. Bangla also doesn’t render properly on many phones, which is another consideration. So, “the intention is good.”
But this simple image is 3 megabytes (say, 3,000 kilobytes), where a 50-kilobyte image would have shown the same text looking nearly identical; the difference would be hard to spot with the naked eye. In other words, it is 60 times larger than necessary.
Now, let’s consider scalability. </p>
<p>If BRAC Bank has some 500,000 customers, sending this one email generated (3 megabytes x 500,000) = 1,500 gigabytes of “Data Out” traffic from the bank’s mail server (edm.bracbank.com).
And had the image been 50 kilobytes, the same job would have been done with about 25 gigabytes of data traffic, i.e., one-sixtieth of that, and the whole email batch would have finished sending 60 times faster.</p>
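The back-of-the-envelope numbers above can be checked in a few lines of JavaScript (the 500,000-customer count is an assumption, as in the text):

```javascript
// Back-of-the-envelope: total "Data Out" for one batch email.
const customers = 500_000;     // assumed customer count
const bigImageMB = 3;          // the actual image, ~3 MB
const smallImageKB = 50;       // what it could have been

const bigTotalGB = (bigImageMB * customers) / 1024;              // MB -> GB
const smallTotalGB = (smallImageKB * customers) / (1024 * 1024); // KB -> GB

console.log(bigTotalGB.toFixed(0));   // "1465", i.e. roughly 1,500 GB
console.log(smallTotalGB.toFixed(0)); // "24", i.e. about 25 GB
console.log((bigTotalGB / smallTotalGB).toFixed(0)); // "61", roughly 60x
```

The binary (1024-based) units give slightly different round numbers than the decimal approximation in the text, but the 60x ratio holds either way.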
<p>Now, you could say the bank has money, let it spend. That’s one way to look at it, but consider another side.</p>
<p>Most people in Bangladesh access the internet on mobile. That means a big part of the extra cost of viewing this one important email, 3 megabytes per person (say, 1,000 gigabytes in total), went straight into the pockets of the mobile operators.
Also, given the state of the internet in most parts of Bangladesh, this email would take 10 seconds to load, something that could have been done in under a second.</p>
<p>Now suppose the image were 10 megabytes instead of 3. What would be the problem?
“Not scalable”, that’s all.</p>Create a free go-link server “on edge” using Cloudflare Worker KV2021-06-06T00:00:00+06:002021-06-06T00:00:00+06:00Khaled Monsoortag:blog.kmonsoor.com,2021-06-06:/golink-server-using-cloudflare-worker-kv/<p>Among quite a few ways to implement a go-link server (i.e., url-forwarder, short-url server, etc.), I will show how to use free-tier Cloudflare Worker (& <span class="caps">KV</span>) to create an in-house, on-edge, <strong>no-webserver</strong> go-link server.</p><p>Among quite a few ways to implement a go-link server (i.e., url-forwarder, short-url server, etc.), I’m going to show you how to use free-tier Cloudflare Worker (& <span class="caps">KV</span>) to create an in-house, on-edge, <strong>no-webserver</strong> go-link server.</p>
<p>For example, the short-link for this article is <a href="https://go.kmonsoor.com/golink-kv">go.kmonsoor.com/golink-kv</a> </p>
<p><img alt="overall structure" src="https://i.imgur.com/MjIS5gD.png"></p>
<ul>
<li><code>/latest</code> (by which I mean <code>go.yourdomain.co/latest</code>) may point to <code>https://www.yourcompany.com/about/news</code> which is a public page</li>
<li><code>/hr-help</code> may point to <code>https://www.company-internal.com/long-link/hr/contact.html</code>, which is company’s internal human-resources help portal</li>
<li><code>/cnypromo</code> may point to <code>https://shop.yourcompany.com/sales/promotions/?marketing-promo=2021-cny</code> which is a temporary sales promotions page targeting the shoppers during the Chinese new year of 2021.</li>
</ul>
<p>Please note that with the setup and the code below, it’ll be possible to resolve short-links via a <strong>single</strong> sub-domain, e.g., <code>go.your-domain.co</code>. However, with some modification of the code, it’s possible to resolve/redirect via <em>any number of domains</em> (your own, of course) towards any other public or private <span class="caps">URL</span>, and all sorts of other novelties. For brevity’s sake, though, I will discuss only the first one, the single sub-domain use case.</p>
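The example mappings above amount to a plain key-to-URL table. As a sketch in JavaScript (using the illustrative URLs from the list, which are of course not real):

```javascript
// Hypothetical short-link map; keys are the path segments (no leading '/').
const redirects = {
  "latest":   "https://www.yourcompany.com/about/news",
  "hr-help":  "https://www.company-internal.com/long-link/hr/contact.html",
  "cnypromo": "https://shop.yourcompany.com/sales/promotions/?marketing-promo=2021-cny",
};

// Resolving "go.yourdomain.co/latest" is then a single lookup:
console.log(redirects["latest"]); // → https://www.yourcompany.com/about/news
```

In the actual setup this table lives in a Cloudflare KV namespace rather than in code, which is what the following sections build.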
<p>To set up a go-link server or short-<span class="caps">URL</span> resolver via a proper <span class="caps">KV</span>+Worker combination, we’ll go through these steps:</p>
<div class="toc">
<ul>
<li><a href="#pre-requisites">Pre-requisites</a></li>
<li><a href="#create-the-short-link-map-as-a-kv">Create the short-link map as a <span class="caps">KV</span></a></li>
<li><a href="#mapping-a-kv-to-a-worker-variable">Mapping a <span class="caps">KV</span> to a Worker variable</a></li>
<li><a href="#handling-a-route-with-webworker">Handling a route with webworker</a></li>
<li><a href="#create-the-worker">Create the Worker</a></li>
<li><a href="#pointing-a-dns-record-to-the-worker">Pointing a <span class="caps">DNS</span> record to the Worker</a></li>
<li><a href="#next-step">Next step</a></li>
<li><a href="#related">Related</a></li>
</ul>
</div>
<h1 id="pre-requisites">Pre-requisites<a class="headerlink" href="#pre-requisites" title="Permanent link">¶</a></h1>
<ul>
<li>The <span class="caps">DNS</span> resolver for the <strong>root</strong> domain (in the example below, <em><code>kmonsoor.com</code></em>) needs to be Cloudflare. That’s because the core of the solution, the “worker”, runs on the Cloudflare edge location nearest to the user, using a standard <span class="caps">KV</span> (“key, value”) list.</li>
<li>Write permission to the <span class="caps">DNS</span> configuration as you’d need to add a new <span class="caps">AAAA</span> <span class="caps">DNS</span> record.</li>
<li>Some knowledge of JavaScript (<code>ES6</code>), as we are going to write the “worker” in that language.</li>
</ul>
<h1 id="create-the-short-link-map-as-a-kv">Create the short-link map as a <span class="caps">KV</span><a class="headerlink" href="#create-the-short-link-map-as-a-kv" title="Permanent link">¶</a></h1>
<p>We’ll start the setup by creating the short-link map: the mapping between the short-link segments that you (or someone in your org) define and the actual URLs they need to point to.</p>
<p>Find the <span class="caps">KV</span> stuff in the <code>Workers</code> section. In the screenshot below, please ignore the “Route” section for now. </p>
<p><img alt="Find the KV stuff in the Workers section" src="https://i.imgur.com/b2Rk45u.png"></p>
<ul>
<li>You’d need to create a Worker <span class="caps">KV</span> “Namespace”. Name the namespace as you see fit; I named it <code>REDIRECTS</code> (in all caps just as a convention, not a requirement). </li>
<li>List the short links <span class="amp">&</span> their respective target URLs. From the examples in the intro, the keys <code>latest</code>, <code>hr-help</code>, <code>cnypromo</code>, etc. would go in as the “key”, and the full target links as the respective “value”.</li>
<li>Remember <span class="caps">NOT</span> to start the short part with ‘/’. That’ll be taken care of in the code.</li>
</ul>
<p><img alt="Create the short-link map as a KV" src="https://i.imgur.com/jkC8bSr.png"></p>
<p>Once you’ve listed all your desired (short-link, target-link) combinations, we have a <span class="caps">KV</span> on Cloudflare. However, it’s not referenceable from your Worker code, not yet. Hence the next step.</p>
<h1 id="mapping-a-kv-to-a-worker-variable">Mapping a <span class="caps">KV</span> to a Worker variable<a class="headerlink" href="#mapping-a-kv-to-a-worker-variable" title="Permanent link">¶</a></h1>
<p>Now, we will map the previously created <span class="caps">KV</span> to a variable that can be referenced from our Worker code. Please note that though I used different names for the namespace and the variable, they can be the same as well. Also, note that multiple Workers can access a single <span class="caps">KV</span>, and vice versa; a single Worker can reference multiple KVs.</p>
<p><img alt="Mapping a KV to a Worker variable" src="https://i.imgur.com/lb7G9si.png"></p>
<h1 id="handling-a-route-with-webworker">Handling a route with webworker<a class="headerlink" href="#handling-a-route-with-webworker" title="Permanent link">¶</a></h1>
<p><img alt="Handling a route with webworker" src="https://i.imgur.com/KohHRfR.png"></p>
<h1 id="create-the-worker">Create the Worker<a class="headerlink" href="#create-the-worker" title="Permanent link">¶</a></h1>
<p>Now, we will write the Worker code that runs on the <code>V8</code> runtime at the Cloudflare “edge” location nearest to the requesting user, executing the code and delivering the result(s). In this case, that means redirecting the user-requested address to the one you mapped in the <span class="caps">KV</span> namespace above.</p>
<p><img alt="Creating a worker" src="https://i.imgur.com/eNfZNyN.png"></p>
<p>The code editor looks like this: </p>
<p><img alt="The code editor for Cloudflare worker" src="https://i.imgur.com/pb9AE9v.png"></p>
<p>If you’d rather copy-paste, please feel free to do so from the GitHub Gist below.</p>
<div class="gist">
<script src="https://gist.github.com/kmonsoor/dc9f96660423c96471f8574ba018d867.js"></script>
</div>
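In essence, such a worker boils down to something like the following minimal sketch. This is a simplified illustration, assuming the KV namespace is bound to a variable named <code>REDIRECTS</code> as configured above; the actual code in the gist may differ in details.

```javascript
// Minimal go-link worker sketch (illustrative; assumes a KV namespace
// bound to the global variable REDIRECTS, as set up earlier).

// Pure helper: strip leading '/' and look the key up in a KV-like store.
// KV keys were stored WITHOUT a leading '/', hence the normalization here.
async function resolve(pathname, store) {
  const key = pathname.replace(/^\/+/, "");
  if (!key) return null;        // bare "go.yourdomain.co/" has no key
  return await store.get(key);  // KV returns null for unknown keys
}

async function handleRequest(request) {
  const url = new URL(request.url);
  const target = await resolve(url.pathname, REDIRECTS);
  if (target) {
    return Response.redirect(target, 302); // temporary redirect to target
  }
  return new Response("Not found", { status: 404 });
}

// The Workers runtime provides the global fetch-event hook.
if (typeof addEventListener === "function") {
  addEventListener("fetch", (event) => {
    event.respondWith(handleRequest(event.request));
  });
}
```

The `resolve` helper is kept separate from the event plumbing so the lookup logic can be tested with any object that has a `get` method.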
<p>Once done, it should look like …
<img alt="created webworker" src="https://i.imgur.com/XSdKB56.png"></p>
<h1 id="pointing-a-dns-record-to-the-worker">Pointing a <span class="caps">DNS</span> record to the Worker<a class="headerlink" href="#pointing-a-dns-record-to-the-worker" title="Permanent link">¶</a></h1>
<p>Finally, we need to point a <span class="caps">DNS</span> record so that all requests to your re-routing sub-domain (e.g. <code>go.your-domain.com</code>) reach the Cloudflare Worker that we just created.</p>
<p>According to the Cloudflare docs, the <span class="caps">DNS</span> record must be an <span class="caps">AAAA</span> record, pointing to the IPv6 address <code>100::</code>. The “Name” here is the “sub-domain” part of your choice, which had better be short to rightfully serve our goal here. </p>
<p><img alt="Pointing a DNS record to it" src="https://i.imgur.com/62bk7pe.png"></p>
<p>Voila! Now, test some of the short-urls that you’ve mapped via the <span class="caps">KV</span>. Enjoy!
Watch out for your usage against the quota, though. <a href="https://developers.cloudflare.com/workers/platform/limits#worker-limits">Here’s the limit</a>. </p>
<p>I think you’ll be fine, unless you’re some celebrity ;)</p>
<h1 id="next-step">Next step<a class="headerlink" href="#next-step" title="Permanent link">¶</a></h1>
<p>As the next step, I’m thinking of creating a generic <code>Go/Link</code> resolver browser extension. Then, someone could set their own default domain, or a company domain of choice, as the short-domain host. In that case, entering just <code>go/hr-help</code> in the browser would take you to <code>https://www.company-internal.com/.../hr/contact.html</code>, as discussed at the beginning (remember the example of an internal human-resources help portal?).</p>
<h1 id="related">Related<a class="headerlink" href="#related" title="Permanent link">¶</a></h1>
<p>If you want to do this url-redirection <strong>on your own server, using only a webserver</strong>, try this: <a href="https://go.kmonsoor.com/golink-caddy">Personal short-link server using only Caddyserver</a></p>
<hr>
<p>If you find this post helpful, you can show your support <a href="https://www.patreon.com/kmonsoor">through Patreon</a> or by <a href="https://ko-fi.com/kmonsoor">buying me a coffee</a>. <em>Thanks!</em></p>TL;DR what cloud provider to use in 20212021-05-22T00:00:00+06:002021-05-22T00:00:00+06:00Khaled Monsoortag:blog.kmonsoor.com,2021-05-22:/TLDR-what-cloud-to-use-2021/<p>Among the thousands of combinations a company can take to choose from the cloud providers and their products, this is my <span class="caps">TL</span>;<span class="caps">DR</span> suggestion</p><p>The sheer number of combinations a company can choose from the cloud providers and their product suites is mind-boggling. Hence, I decided to break it down in a concise form for the busy C-suite executives.</p>
<p>In my limited experience and humble opinion, I suggest …</p>
<p>➤ If your company is a small SaaS shop with 10-ish engineers, stick with DigitalOcean, Linode, <span class="caps">OVH</span>, etc., which are best known as cloud “instance” providers.<br>
Think McDonald’s: reliable, cheap, fast; but you won’t take your date there. <br>
<strong>Budget</strong>: 💰</p>
<p>➤ If you want a whole cloud experience (e.g., <span class="caps">VPC</span>, firewall, <span class="caps">WAF</span>, etc., on the menu), start with Google Cloud, then try <span class="caps">AWS</span> later.<br>
Google Cloud would be the quickest to grasp the cloud concepts and get going. The <span class="caps">UI</span> of the <span class="caps">AWS</span> console is a bit messy compared to <span class="caps">GCP</span>; it just takes more time to get a proper grip.<br>
Imagine them as full-course, Michelin-star restaurants. However, the product names are so abstract that you need a full-sized chart just to decode them. ;)
<strong>Budget</strong>: 💰💰💰</p>
<p>➤ Are you planning to set up a million-dollar infra for a billion-dollar company? Go for some <span class="caps">GCP</span>+<span class="caps">AWS</span> multi-cloud setup. You’re gonna get rebates from both on the scale of hundreds of thousands of dollars. And Microsoft Azure is gonna offer you some million-$ free tier, hoping to get the company hooked on Azure. :D
<strong>Budget</strong>: 💰💰</p>
<p>➤ On the other hand, if you run a govt agency or a company where wearing suits is the mainstream, Microsoft Azure is your best bet.<br>
A bunch of consultancy companies to choose from; you just need to approve the budget, and you’ll get things up <span class="amp">&</span> running, though you may miss the deadline by months, if not years. But there’d be no need to hire smarter ppl than the ones you already have.
<strong>Budget</strong>: 💰💰💸</p>
<p>Need an even more comprehensive guide? Gotcha, fam …</p>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr"><span class="caps">CTO</span>: we're having hard time choosing a cloud provider<br>…<br>“say no more, fam, I gotcha …” <a href="https://t.co/hR3rMruWWi">pic.twitter.com/hR3rMruWWi</a></p>— Khaled Monsoor ✨ (@kmonsoor) <a href="https://twitter.com/kmonsoor/status/1395959443376857088?ref_src=twsrc%5Etfw">May 22, 2021</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p><em><span class="caps">PS</span></em> This post is inspired by a LinkedIn post of mine where I shared about my short experience with the Microsoft Azure <strong>DevOps</strong> suite</p>HA(High-Availability) Setup for InfluxDB2018-01-18T00:00:00+06:002018-01-18T00:00:00+06:00Khaled Monsoortag:blog.kmonsoor.com,2018-01-18:/ha-setup-for-influxdb/<p>Create a robust, highly-available, time-series InfluxDB cluster with the community(free) version of it</p><p><strong><span class="caps">NOTE</span></strong>
<em>Since I wrote this article, all the components used in the architecture below have gone through many updates and releases. While the general premise involving <code>influxdb-relay</code> and the multiplexing might still hold, please sync up with the latest release docs before jumping into any serious system design.</em></p>
<hr>
<p>As of version 0.9, you cannot create an InfluxDB cluster with the open-sourced free edition; only the commercially available InfluxDB Enterprise can do that for now. That stirred up the early-adopter enthusiasts, especially those using it in professional setups. They complained that InfluxData, the company behind InfluxDB, is trying to milk the <span class="caps">OSS</span> solution for profit.</p>
<p><img alt="Archiving isn't easy ... tobias-fischer-PkbZahEG2Ng" src="https://i.imgur.com/0IdYOYnl.jpg"></p>
<p>I can’t blame the InfluxData guys much, as they’ve got to pay their bills too. So far, we — the users of open-source systems — haven’t shown much promise in supporting the financial realities of such projects. Funding the continued development of <span class="caps">OSS</span> products through donations, patrons, or enterprise sponsorship alone is far too rare and unpredictable, even for projects that many successful organizations heavily rely on.</p>
<p>Anyways, InfluxData then promised and later introduced <code>Influx Relay</code> as a complimentary consolation for the missing <span class="caps">HA</span> parts of InfluxDB. You can get the details here and here about that. </p>
<h2 id="premise">Premise<a class="headerlink" href="#premise" title="Permanent link">¶</a></h2>
<p>For my needs, I had to create a reliable <span class="caps">HA</span> (High-Availability) setup from the available free options, hence InfluxDB and the relay. It’s quite a bit short of an InfluxDB cluster in terms of robustness or ease of setup, but it got the job done, at least for me.</p>
<p>I needed a setup to receive system stats from 500+ instances and to store them for a while, but without breaking the bank on <span class="caps">AWS</span> bills. Meaning, I could ask for, and use, only a couple of instances for my solution.</p>
<p>Here were my trade-offs.</p>
<ul>
<li>Not too many instances for this purpose, nor any of the heavyweight lifters, e.g., <span class="caps">AWS</span> m3.xlarge, etc. Use only what’s necessary. </li>
<li>To satisfy the budget, avoid pay-per-use solutions as far as possible.</li>
<li>The solution must not be crazy complex, so that handover to the DevOps team would be smooth.</li>
<li>Reads would be rare compared to writes. The related Grafana dashboards would only be used to investigate issues, by a handful of people.</li>
</ul>
<h2 id="overall-design">Overall Design<a class="headerlink" href="#overall-design" title="Permanent link">¶</a></h2>
<h3 id="write">Write<a class="headerlink" href="#write" title="Permanent link">¶</a></h3>
<p>From a bird’s-eye view, I decided to use two server instances running in parallel, hosting InfluxDB on them independently, and then to send the same data over to both of them for storage. This scheme mostly resembles <a href="https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_1"><span class="caps">RAID</span>-1 systems</a>.</p>
<p><img alt="Overall architecture" src="https://i.imgur.com/ZKYIyOd.png"></p>
<p>That brings up a couple of challenges.</p>
<ul>
<li>
<p>None of the agents I used on the sender side could multiplex their output. That means they could send data to a single destination, not multiple.
On the Windows front, I used <code>Telegraf</code>, which can randomly switch between pre-listed destinations, but <span class="caps">NOT</span> send to multiple at once.<br>
In the case of Linux hosts, I used <code>Netdata</code>, which is excellent in its own right, but unable to send stats to multiple destinations.<br>
Here comes <code>Influx-relay</code>. It can receive a time-series data stream from hosts on a <span class="caps">TCP</span> or <span class="caps">UDP</span> port, buffer it for a while, and then re-send the received and buffered data to multiple receiving ends, each of which can be either an InfluxDB instance or another listening Influx-relay instance.<br>
This chaining can broaden the relaying scheme even further. However, for my purpose, relay-chaining was not necessary. Rather, from the relay, I am sending data to the two separate InfluxDB instances, running on two separate servers. </p>
</li>
<li>
<p>Now that I have partially multiplexed the output, my hosts (senders) are still only able to send to one destination. So, I need a proxy as well as a load-balancer. For a while, I was torn between <span class="caps">NGINX</span> and HAProxy; both were new to me. </p>
</li>
</ul>
<p>However, for a couple of reasons, I went for HAProxy. Firstly, I don’t need <span class="caps">HTTP</span> session management. Secondly, as I wanted to keep my <span class="caps">UDP</span> for later, HAProxy was perfectly capable of that.<br>
<span class="caps">NGINX</span> has the support recently, but the maturity was a concern. Also, configuring <span class="caps">NGINX</span> seems a little intimidating (which I know might not be so true). Last but not least, and for what it’s worth, out-of-the-box, HAProxy’s stat page carries much more in-depth information than that of free-version of <span class="caps">NGINX</span>.<br>
Upon receiving the stats stream, HAProxy was supposed to send that to different Influx-relays in a load-balanced fashion.</p>
<p>So, here’s my rough plan. </p>
<p>collector-agent → HAProxy → (50/50 load-balanced) → Influx-relay → (multiplexed) → 2 InfluxDB instances</p>
<p>Now, each piece of received data should go to both of the InfluxDB instances, or at least to one of them in case of failure (or overload) of any of the relays or Influx instances.
Also, I chose to keep the Influx-relays deployed as Docker containers, while keeping HAProxy and InfluxDB running as native services. Of course, you can Dockerize HAProxy and InfluxDB, too. </p>
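To make the multiplexing idea concrete, here is a toy JavaScript sketch of what the relay conceptually does. This is only an illustration, not influxdb-relay’s actual code, and the backend URLs are made up:

```javascript
// Toy model of the relay's fan-out: forward one line-protocol payload to
// every backend, and consider the write OK if at least one backend took it.
async function fanOutWrite(payload, backends, post) {
  const results = await Promise.allSettled(
    backends.map((url) => post(url, payload))
  );
  const ok = results.filter((r) => r.status === "fulfilled").length;
  return { ok, total: backends.length };
}

// Hypothetical backends, standing in for the two InfluxDB instances:
const backends = [
  "http://influx-a.internal:8086/write",
  "http://influx-b.internal:8086/write",
];

// An example `post` using fetch; the real relay also buffers and retries.
const post = (url, body) => fetch(url, { method: "POST", body });
```

Because `post` is passed in as a parameter, the fan-out logic can be exercised without any real network, which is how the relay behavior (both backends up vs. one down) can be reasoned about.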
<h3 id="read">Read<a class="headerlink" href="#read" title="Permanent link">¶</a></h3>
<p>As I’ve already noted, reading the data, i.e., fetching it to visualize on the Grafana end, will happen rarely and sporadically; only to investigate alarms or other client-side performance issues. </p>
<p>So, the read requests reaching the HAProxy end need not much routing, other than passing directly to InfluxDB itself. Still, to better distribute the load, I decided to balance it on a 50/50 basis.</p>
<h3 id="ports">Ports<a class="headerlink" href="#ports" title="Permanent link">¶</a></h3>
<ul>
<li>As all the <span class="caps">READ</span> requests are routed through the <code>HAProxy</code> running on each of the instances, only HAProxy’s port should be open to the external world for this purpose. </li>
<li>On the other hand, for <span class="caps">WRITE</span> requests, each InfluxDB receives data from the relays, one on its own instance and one on the other instance; so InfluxDB should listen on its own port for <span class="caps">WRITE</span> requests only. But this port must be accessible only from your own <span class="caps">VPC</span> zone, not open to the outside world.</li>
<li>For HAProxy as well as InfluxDB, you can obviously use the default ports, which are 8086 <span class="amp">&</span> 8088, respectively. Or, you can choose other ports (security through obscurity). Your call. In this write-up, I’ll go with the defaults.</li>
</ul>
<h3 id="authentication-ssl">Authentication, <span class="caps">SSL</span><a class="headerlink" href="#authentication-ssl" title="Permanent link">¶</a></h3>
<p>You can configure <span class="caps">SSL</span> with your own server certificates through the HAProxy configs. You can even go for <span class="caps">SSL</span> from the relays to the InfluxDB writes. If your sender hosts connect to your HAProxy over the public internet, you should at least go for password-based authentication, and better, utilize <span class="caps">SSL</span>. However, for brevity’s sake, I’ll skip them in this post.</p>
<p>**Note: *
Please bear in mind, this is an “in-progress” post; prematurely published to force me to work on it. I have the plan to add all the necessary configurations <span class="amp">&</span> commands, that I used, here.</p>Characters Efficient Flow in Designing Bangla Font2013-08-22T00:00:00+06:002013-08-22T00:00:00+06:00Khaled Monsoortag:blog.kmonsoor.com,2013-08-22:/bangla-font-design-characters-flow/<p>How to efficiently progress in designing font through Bangla characters.</p><h2 id="optimized-workflow-for-designing-basic-characters-in-bangla-bengali-script">Optimized workflow for designing basic characters in Bangla / Bengali script<a class="headerlink" href="#optimized-workflow-for-designing-basic-characters-in-bangla-bengali-script" title="Permanent link">¶</a></h2>
<p><img alt="Bangla font design flow" src="http://i.imgur.com/y9Av5yN.png"></p>