<?xml version="1.0" encoding="UTF-8"?>
<rss version='2.0' xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Bekir Dogan</title>
    <description>A DevOps guy with peculiar aura and an inappeasable appetite for all wonderful niche technologies</description>
    <link>https://bergerx.silvrback.com/feed</link>
    <atom:link href="https://bergerx.silvrback.com/feed" rel="self" type="application/rss+xml"/>
    <category domain="bergerx.silvrback.com">Content Management/Blog</category>
    <language>en-us</language>
      <pubDate>Mon, 06 Mar 2017 12:54:59 -1100</pubDate>
    <managingEditor>bekirdo@gmail.com (Bekir Dogan)</managingEditor>
      <item>
        <guid>http://blog.bdgn.net/why-i-don-t-like-azure#30322</guid>
          <pubDate>Mon, 06 Mar 2017 12:54:59 -1100</pubDate>
        <link>http://blog.bdgn.net/why-i-don-t-like-azure</link>
        <title>Why I didn&#39;t like Azure!</title>
        <description></description>
<content:encoded><![CDATA[<p>About 8 months ago I spent a couple of months trying to deal with Azure. Recently I&#39;ve been asked about Azure a few times, so I tried to put together why I feel Azure is not suitable for the kind of automation I used to work on.<br>
Be warned that most of what is written here consists of my personal thoughts and is already quite old. <strong>I got several pieces of feedback after this post; see the <a href="https://disqus.com/embed/comments/?base=default&version=60c69418f14a8b7401cd956e1062204c&f=bdgn&t_i=silvrback-bergerx-30322&t_u=http%3A%2F%2Fblog.bdgn.net%2Fwhy-i-don-t-like-azure&t_d=Why%20I%20didn%27t%20like%20Azure!&t_t=Why%20I%20didn%27t%20like%20Azure!&s_o=default#">blog post comments</a> and the <a href="https://www.reddit.com/r/AZURE/comments/5xzmnw/why_i_didnt_like_azure/deor0et/">reddit comments</a>.</strong></p>

<p>When I was dealing with Azure, the service had several bugs that had not been fixed for years. Instead of fixing them, they released long documents to help their clients understand the problems and work around them properly. Here is a quote from Azure&#39;s own <a href="https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-allocation-failures">cloud-services-allocation-failures</a> page:</p>

<blockquote>
<p>You may occasionally receive errors when performing these operations even before you reach the Azure subscription limits. This article explains the causes of some of the common allocation failures and suggests possible remediation. The information may also be useful when you plan the deployment of your services.</p>
</blockquote>

<p>This is simply telling me that trying to create a VM sometimes starts returning a non-temporary error, which effectively means you can&#39;t use the offering&#39;s main feature, creating a new VM, unless you work around a pretty well documented bug. The error is non-deterministic unless you know about Azure internals, which are only documented in the troubleshooting page I linked above.</p>

<p>One of my old colleagues said this after seeing the same pattern several times: &quot;Is this the Azure way? Write a very detailed document for a well-known bug to let users deal with it, instead of fixing the product.&quot;</p>

<p>Let me tell you what I experienced differently from AWS.</p>

<p><strong>Unnecessarily complex API design and flows.</strong><br>
As an example, just to attach a disk to a VM, one has to deal with lots of confusing terms and their Azure-specific meanings: Blob (Block Blob, Page Blob), AFS, VM disk (a special kind of blob), VHD, LUN number (why do I have to care about LUN numbers?)...<br>
Or let&#39;s take creating a VM: you first have to understand what Cloud Services and deployments are before you can create VMs. Even if there are some CLI tools that can create these for you, as soon as you start implementing some automation you immediately find yourself in a position where you have to understand all of this. I also remember my first time dealing with AWS; it felt massive, but I never felt that desperate when trying to understand an AWS service for the first time. I was really frustrated when I first tried to understand Azure terminology, mainly because of the duplicate/redundant API versions and portals I mention below.</p>

<p><strong>API was not stable enough</strong> (and it was already a 5+ year old API, dating from 2010). For example, when I deleted a VM, even after the delete event was reported as completed (which itself took a couple of minutes to resolve to success or failure), I couldn&#39;t delete the related disks for another 10 minutes. This was not an isolated case; we hit similar problems repeatedly. These cases were not clearly documented, and after hitting several issues like this, our infrastructure automation started to be polluted by unnecessary but unavoidable checks and sleep-and-retry logic in many places. This also made debugging and troubleshooting harder in a fast-paced infra environment, especially with many VMs being created and destroyed on the fly for different batch jobs.</p>
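<p>To make that concrete, here is a minimal sketch (in Python, with hypothetical names, not any Azure SDK API) of the kind of poll-with-timeout helper this forced into our automation:</p>

```python
import time

def wait_until(predicate, timeout=600.0, interval=5.0,
               clock=time.monotonic, sleep=time.sleep):
    """Poll `predicate` until it returns True or `timeout` seconds pass.

    Returns True on success, False on timeout. This is the shape of the
    unavoidable "is the disk finally deletable yet?" checks.
    """
    deadline = clock() + timeout
    while True:
        if predicate():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)

# Hypothetical usage: block until the platform stops claiming the disk
# is still attached to the already-deleted VM.
# wait_until(lambda: not disk_is_attached("vm-42-data"), timeout=900, interval=30)
```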

<p><strong>Non-deterministic API return codes.</strong> Some Azure API calls randomly return 500, which is also the case for most AWS services, so not a big deal on its own. But AWS explicitly documents these cases and covers them, along with the necessary retry, back-off and throttle-handling mechanisms, in the provided SDKs/libraries; this is also why they always suggest using the provided SDKs instead of implementing API clients yourself. I didn&#39;t see a similar mechanism applied in many places in Azure&#39;s Python SDK (the one we were able to find for our automation). Since we expected these problems to be addressed in the provided libraries, we only implemented such logic here and there, and after some stage our infrastructure automation code ended up as spaghetti, full of back-off and retry logic, leaving us with a debt to be paid in our own source code. This could also be our mistake for not studying the provided libraries first.</p>
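<p>For reference, the back-off-and-retry wrapper we kept re-writing looks roughly like this (a generic sketch; the exception types, delays and the <code>create_vm</code> call are placeholders, not Azure SDK specifics):</p>

```python
import random
import time

def retry(func, attempts=5, base_delay=1.0, max_delay=30.0,
          retry_on=(Exception,), sleep=time.sleep):
    """Call `func`, retrying on `retry_on` with exponential back-off
    plus full jitter; re-raises the last error when attempts run out."""
    for attempt in range(attempts):
        try:
            return func()
        except retry_on:
            if attempt == attempts - 1:
                raise
            # Exponential back-off with full jitter, capped at max_delay.
            delay = min(max_delay, base_delay * (2 ** attempt))
            sleep(random.uniform(0, delay))

# Hypothetical usage, wrapping a flaky "create VM" API call:
# vm = retry(lambda: client.create_vm(spec), retry_on=(ServerError,))
```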

<p>I feel like the Azure API was not designed with overall architectural needs in mind. We were trying to mount external block volumes and could not find a proper storage driver like <a href="https://github.com/CatalystCode/azure-flocker-driver">flocker</a> or <a href="https://github.com/codedellemc/libstorage/pull/372">REX-Ray</a>. Both were/are running on AWS without issues, but we could not find any similar upstream tool providing that functionality for Azure, which made me think I was doing something wrong. I spent some time trying to find one, then trying to understand whether we could implement it ourselves, for our use only, as a quick solution. It turned out to be a buggy aspect of the Azure platform: you have to explicitly manage the order of mounted/unmounted devices. The API asks for a LUN number, which effectively forces you to unmount devices in the same order they were mounted. There were some other issues in the API flow that prevented this mount-the-related-blockdevice-to-a-vm-when-needed pattern from being implemented, which I can&#39;t remember off the top of my head. I don&#39;t know if this is still the case, but this was around 8 months ago.</p>

<p>Another one is that you can&#39;t start several machines in the same deployment at once. This may be a bug in the client library implementation we used (which was also provided by Azure), but we were forced to start nodes one by one in our automation. This can still be worked around, but one has to design the whole automation layer with these Azure-specific, not-so-clearly-documented integration limits in mind. It may be my mistake to start implementing automation without fully understanding the underlying systems, but finding these kinds of limits in the middle of an implementation is not fun.</p>

<p>Another example problem: some nodes suddenly went missing(!) and never came back. The Azure support team explained that this &quot;could&quot; happen if we create more than 40 volumes in the same volume set (blob). But each node&#39;s root directory is also a blob?!?! This effectively forced us to manually create and manage many blobs, one per fixed number of nodes.</p>

<p><strong>Hard-to-understand API(s).</strong> There are 2 API versions, <strong>Azure Classic vs. Resource Manager</strong>, and also 2 different web portals, the <strong>Classic and New Azure portals</strong>. Neither portal was feature-complete for either API version; some features found on one portal cannot be found on the other, forcing users to use both. This also makes examples, related blog posts and documentation very hard to follow: for each resource you find, you have to figure out which version of the API the writer is using. This is usually time-consuming guesswork, since it is not obvious; you have to scan the document for clues to identify which one it is written for.</p>

<p><strong>Microsoft-only tools!!!</strong> Most functionality is implemented only in the PowerShell CLI (I guess in some .NET-specific technologies). We also found Python and Node.js clients, but neither covers much of the API functionality we needed. This lack of implementation is not only in the CLI tools; the same gaps exist in the corresponding libraries. You&#39;ll &quot;have to&quot; use PowerShell to automate advanced stuff, unless you are willing to add new functionality to the provided libraries and their CLI tools and start maintaining your own fork. I feel like <strong>for most stuff not in the marketing materials or getting-started tutorials, you&#39;ll likely be forced to use Microsoft-specific tools to automate.</strong></p>

<p>Another example problem I experienced was at the network layer. We started to migrate our infrastructure by setting up a site-to-site VPN connection between Azure and AWS. We then realized that we couldn&#39;t use custom routing to point at a node, which blocked us from using solutions like openswan/strongswan on a local node in a subnet; we were limited to the integrations Azure provides. The documentation about VPN integrations was not clear, but we decided to try our chances. It turned out that after making some network definitions, we had to download an XML file, modify it and upload it back to do what we needed. And this can&#39;t be automated from anything but PowerShell. The functionality was also missing in both portals, so we couldn&#39;t even do it manually, and it was missing in the Azure-provided Python SDK and the Node.js Azure CLI tools. We thought we could add this functionality to the Python libraries we used, but we ended up not doing so because we could not find sufficient documentation for the corresponding API calls. A simple one-week task turned into a months-long, hard-to-maintain solution that gave us headaches with every modification.</p>
]]></content:encoded>
      </item>
      <item>
        <guid>http://blog.bdgn.net/new-in-openshift-beware#30242</guid>
          <pubDate>Sun, 26 Feb 2017 11:48:24 -1100</pubDate>
        <link>http://blog.bdgn.net/new-in-openshift-beware</link>
        <title>New in OpenShift! Beware the wrong versions and documentation!</title>
        <description>Here is how to avoid some traps when starting with OpenShift, or &quot;OpenShift for Kubernetes users&quot;.</description>
        <content:encoded><![CDATA[<h1 id="tl-dr">TL;DR</h1>

<ul>
<li><strong>Always use the correct OpenShift documentation</strong> (<a href="https://docs.openshift.com/container-platform/3.4/welcome/index.html">3.4 is currently the latest released version</a>); you can also refer to the <a href="https://kubernetes.io/docs/">Kubernetes documentation</a>, but note that it is not versioned,</li>
<li>The <strong><code>oc</code> CLI tool is mostly a wrapper around <code>kubectl</code></strong>, so you can refer to the better-documented <a href="https://kubernetes.io/docs/user-guide/kubectl/"><code>kubectl</code> documentation</a> for managing Kubernetes resources,</li>
<li><strong>Always use the same <code>oc</code> CLI version as your OpenShift cluster</strong>, or prepare to deal with broken cluster components like the registry, router, etc.,</li>
<li><strong>Don’t use the documentation pages Google finds for you</strong>; Google nearly always redirects you to an old version’s documentation. Always check the URL Google takes you to; once you familiarize yourself with the different documentation URLs :(, you can do the trick yourself: <code>..shift.com/container-platform/3.4/…</code>,</li>
<li><strong>Don&#39;t expect all Kubernetes features in corresponding OpenShift version</strong>, <a href="https://github.com/openshift/origin#support-for-kubernetes-alpha-features">they selectively enable alpha features</a>,</li>
<li><strong>Don&#39;t expect to find a list about how/where OpenShift differentiates from Kubernetes</strong> (y u do this RedHat !?!?!)</li>
</ul>

<hr>

<h1 id="some-background-and-how-we-came-here">Some background and how we came here</h1>

<p>So, as far as I can follow it, here is a summary of how OpenShift&#39;s history has impacted its documentation and open source branches. This should also explain the current state of their documentation.</p>

<h2 id="pre-kubernetes-era-openshift-v2">Pre-Kubernetes era (OpenShift v2)</h2>

<p>OpenShift started with their in-house PaaS implementation:</p>

<ul>
<li>This was OpenShift v2, so don’t consider any v2 documentation for Kubernetes-related implementation; Google searches can land you on documentation for this old, different product,</li>
<li>The “Origin” name was also used as the project name for v2, so don&#39;t assume it refers to OpenShift v3 only,</li>
<li>If you see the “rhc” tool, or the “gear”, “cartridge” or “broker” terms in any documentation, it&#39;s likely about OpenShift v2, not something you want,</li>
</ul>

<h2 id="re-implementation-with-kubernetes-openshift-v3">Re-implementation with Kubernetes (OpenShift v3)</h2>

<p>At some stage, <a href="https://blog.openshift.com/red-hat-chose-kubernetes-openshift/">OpenShift decided to switch to Kubernetes</a>; this is when they started calling it OpenShift v3.</p>

<p>When the switch to Kubernetes happened, they had to bump the major version of the product to 3, hence OpenShift v3. But during the transition, they decided to keep the major version number in sync with Kubernetes for the open source version (Origin).</p>

<p>This created a divergence between major versions, and here is a <strong>mapping between open source and Enterprise versions</strong>:</p>

<ul>
<li><strong><a href="https://docs.openshift.com/enterprise/3.1/welcome/index.html">OpenShift Enterprise 3.1</a> == Origin 1.1</strong> (based on <strong>Kubernetes 1.1</strong>; major version numbers don’t match)</li>
<li><strong><a href="https://docs.openshift.com/enterprise/3.2/welcome/index.html">OpenShift Enterprise 3.2</a> == Origin 1.2</strong> (based on <strong>Kubernetes 1.2</strong>; major version numbers don’t match)</li>
</ul>

<h2 id="enterprise-old-container-platform-new">Enterprise (old) == Container Platform (new)</h2>

<p>Recently they decided to re-brand the “Enterprise” version as “Container Platform”, so we ended up with this new mapping; they also updated the doc URLs accordingly:</p>

<ul>
<li><strong><a href="https://docs.openshift.com/container-platform/3.3/welcome/index.html">OpenShift Container Platform 3.3</a> == Origin 1.3</strong> (based on <strong>Kubernetes 1.3</strong>; major version numbers don’t match)</li>
<li><strong><a href="https://docs.openshift.com/container-platform/3.4/welcome/index.html">OpenShift Container Platform 3.4</a> == Origin 1.4</strong> (based on <strong>Kubernetes 1.4</strong>; major version numbers don’t match)</li>
</ul>

<h2 id="open-source-version-origin-documentation-is-not-versioned-use-enterprise-documentation">Open source version (Origin) documentation is not versioned, use enterprise documentation</h2>

<p><a href="https://docs.openshift.org/latest/welcome/index.html">The open source version’s documentation</a> is never versioned, as far as I know, and has always pointed to the head of the development tree, hence the “latest” in the URL. Be careful about this one, because it points to the latest branch in their repo (currently the unreleased 1.5). Also don&#39;t expect it to be fully up to date for the upcoming release.</p>

<p>So for the open source versions (Origin 1.2, 1.3, 1.4), since you don&#39;t have versioned documentation, you can <strong>use the corresponding Enterprise version&#39;s documentation</strong>. Some features may differ between the enterprise and open source versions, mostly in the installation steps and the load balancer for cluster masters, but the enterprise documentation can still help a lot.</p>

<p>Similar to the documentation, when referring to the source code and the Ansible scripts that maintain the cluster, <strong>always stick to the branches for your version</strong>:</p>

<ul>
<li><a href="https://github.com/openshift/origin/tree/release-1.4">https://github.com/openshift/origin/tree/release-1.4</a>, not <a href="https://github.com/openshift/origin">https://github.com/openshift/origin</a></li>
<li><a href="https://github.com/openshift/openshift-ansible/tree/release-1.4">https://github.com/openshift/openshift-ansible/tree/release-1.4</a> (if you followed advanced installation)</li>
</ul>

<h2 id="list-of-documentation-urls-for-different-releases">List of documentation URLs for different releases</h2>

<p>The open source and enterprise documentation “domains” are different as well; it&#39;s important to get familiar with the URL templates they use if you want to start with the open source version:</p>

<ul>
<li><a href="https://docs.openshift.org/latest/welcome/index.html">https://docs.openshift.org/latest/welcome/index.html</a> (points to the latest in development)</li>
<li><a href="https://docs.openshift.com/container-platform/3.4/welcome/index.html">https://docs.openshift.com/container-platform/3.4/welcome/index.html</a> (current latest release)</li>
<li><a href="https://docs.openshift.com/container-platform/3.3/welcome/index.html">https://docs.openshift.com/container-platform/3.3/welcome/index.html</a></li>
<li><a href="https://docs.openshift.com/enterprise/3.2/welcome/index.html">https://docs.openshift.com/enterprise/3.2/welcome/index.html</a></li>
<li><a href="https://docs.openshift.com/enterprise/3.1/welcome/index.html">https://docs.openshift.com/enterprise/3.1/welcome/index.html</a></li>
<li><a href="https://developers.openshift.com/">https://developers.openshift.com/</a> (OpenShift v2, DON&#39;T USE ANYTHING UNDER THIS DOMAIN)</li>
</ul>

<hr>

<h1 id="openshift-v-s-kubernetes-cli-and-docs">OpenShift vs. Kubernetes: CLI and docs</h1>

<ul>
<li>OpenShift’s <strong><code>oc</code> tool is based on the <code>kubectl</code> codebase</strong> (I guess, but didn&#39;t check); it&#39;s more like a wrapper around the same source code with OpenShift awareness and some extended capabilities,</li>
<li>OpenShift master nodes come with <strong><a href="https://github.com/openshift/openshift-ansible/blob/release-1.4/roles/openshift_cli/library/openshift_container_binary_sync.py#L78"><code>kubectl</code> soft-linked to <code>oc</code></a></strong>, which gives you an idea of the compatibility,</li>
<li>OpenShift has an additional CLI tool called “oadm”. It has been integrated into the “oc” tool as the “oc adm” sub-command, so you can <strong>use <code>oc adm</code> on your laptop instead of trying to use <code>oadm</code> on the masters</strong>; but to use “oc adm” you should be <a href="https://docs.openshift.com/container-platform/3.4/admin_solutions/user_role_mgmt.html#creating-a-cluster-administrator">a user with the “cluster-admin” role</a>,</li>
<li>All the “oc”, “oadm” and “kubectl” commands store credentials and context in the “~/.kube” directory, and commands like “{kubectl,oc} config” mutate that file to set/keep the context,</li>
<li>The “kubectl” documentation can be found <a href="https://kubernetes.io/docs/user-guide/kubectl/">here</a>, and you can refer to it for most “oc” sub-commands; this <a href="https://kubernetes.io/docs/user-guide/kubectl-cheatsheet/"><code>kubectl</code> cheatsheet</a> also helps a lot with getting familiar with “kubectl”.</li>
</ul>

<hr>

<h1 id="use-correct-oc-version">Use correct “oc” version</h1>

<p>If you want to manage cluster components from your laptop, you “have to” use the same <code>oc</code> client version as your OpenShift cluster. So <strong>this is a must: “oc.client.version == openshift.cluster.version”</strong>; you can check both versions with the <code>oc version</code> command.</p>
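<p>As a sketch, a small script can do the comparison for you. The parsing below assumes <code>oc version</code> prints lines like <code>oc v1.4.1</code> and <code>openshift v1.4.1</code> (the format as I remember it; treat that as an assumption and verify against your own output):</p>

```python
import re
import subprocess

def parse_versions(output):
    """Extract the client ("oc v...") and server ("openshift v...")
    major.minor versions from `oc version` output."""
    versions = {}
    for line in output.splitlines():
        m = re.match(r"(oc|openshift)\s+v(\d+\.\d+)", line.strip())
        if m:
            versions[m.group(1)] = m.group(2)
    return versions

def oc_matches_cluster():
    out = subprocess.run(["oc", "version"],
                         capture_output=True, text=True).stdout
    v = parse_versions(out)
    # Compare major.minor only; patch-level drift is usually tolerable.
    return "openshift" in v and v.get("oc") == v.get("openshift")
```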

<p>E.g. if you use a more recent version of “oc” than your cluster and want to install a new router, it&#39;ll likely fail. This is mostly because the “oc” CLI contains the logic for how to deploy a router: the “oc” tool creates some Kubernetes resources in the cluster, but if the CLI does not match the cluster version, the resources generated by the newer CLI likely won&#39;t match what that OpenShift version expects.</p>

<p>If you use <code>oc cluster up</code>, it will bring up a cluster with the same version as your “oc” client, so if you have an old “oc” version, you&#39;ll likely get an old OpenShift cluster.</p>
]]></content:encoded>
      </item>
      <item>
        <guid>http://blog.bdgn.net/ireland-devops-cv-101#29760</guid>
          <pubDate>Tue, 31 Jan 2017 16:17:24 -1100</pubDate>
        <link>http://blog.bdgn.net/ireland-devops-cv-101</link>
        <title>Ireland DevOps CV writing 101</title>
        <description>My take on how a CV should look in Ireland if you are a DevOps with some experience. (May also be applicable to Developers.)</description>
<content:encoded><![CDATA[<p>I moved to Dublin from Turkey, and since then I have had the chance to refer some friends to companies I worked for, and to help some people who wanted to move to Europe by finding an IT job. This is mostly relevant for people with some experience.</p>

<p>Here is my take on how your CV should be prepared if you want to move to Ireland by finding a sponsor for your working visa. I actually wrote this as a checklist for some of my friends after seeing some very basic common mistakes in a couple of CVs.</p>

<h1 id="people-wont-read-your-cv-but-they-will-scan">People won&#39;t read your CV, but they will scan</h1>

<p>People looking to hire are usually busy and don&#39;t have time to carefully read your CV. Nobody reads CVs in detail; <strong>they just scan your CV</strong>, and scan fast. You usually <strong>have ~30 seconds</strong> to convince them to call you for an interview.</p>

<p>People evaluating your CV have a role in mind and are trying to figure out whether you are a good match for it. The most useful way to do this is to match keywords, which are relatively easy to scan. A keyword can be a particular technology name, a methodology you followed or a popular buzzword, so spread them through your CV properly. Mentioning keywords under the experience section does the job well. People will <strong>stop scanning and check around a matching keyword</strong> to see if it is a real match; this is when most people actually read your CV. You can help people scan your CV by making some words bold to draw their attention.</p>

<p>You have to make your CV extremely easy to scan; the reader should see keywords or particular phrases immediately. Try putting emphasis on some keywords or challenges you faced by making some words bold. You can also <strong>present the challenges you solved as your own projects</strong> (e.g. Built CI/CD Pipeline, Implemented Continuous Deployment Automation, Created Monitoring And Logging Dashboards). This makes the reviewer&#39;s life a lot easier.</p>

<h1 id="people-will-expect-to-see-your-most-fresh-knowledge-or-experience-first">People will expect to see your most fresh knowledge or experience first</h1>

<p>They will expect to see your <strong>recent experience and the up-to-date state of your knowledge first</strong>. So move your recent hot topics as close to the top as you can. They can be in any form: a career summary, your experiences, a keyword list or your areas of interest. I suggest putting in a career summary, since this is your chance to describe your profession and experience in relation to your job target.</p>

<p><strong>As people progress to the following pages, they assume the content gets less relevant to your recent expertise</strong>. Whatever is written after the second page may simply be ignored as your &quot;archaic&quot; experience. I&#39;m not saying your CV has to fit in a few pages, but be careful about what you put in the first few.</p>

<p>Putting your full address, education, and personal details like nationality, date of birth or the languages you know at the very top of your CV is not a good idea. Most details other than your name and contact info will just distract people from understanding your experience. If you really want to include that kind of less relevant information, move it to the end of the CV.</p>

<h1 id="first-barrier-passing-hr-review">First barrier: Passing HR review</h1>

<p>Your CV must survive 2 main levels of review. The first level is HR; if it is not eliminated there, it progresses to the next level, the technical people. Unless you only ever apply with a referral, you have to understand how HR works. Here are some techniques I guess HR uses.</p>

<p>Since HR has a lower-level understanding of what you do, they depend more on matching keywords and estimating your experience level from the written experience.</p>

<p>It is significantly important to put as many keywords as you can into your CV, because<br>
some companies and recruiters use automatic CV-scanning software that depends heavily on keyword matching. Recruiters also sometimes use queries like this one (try pasting it into the LinkedIn search box):<br>
<code>docker AND (mesos OR marathon OR kubernetes OR k8s) AND (ansible OR chef OR puppet OR salt OR saltstack) AND (python OR ruby OR perl OR golang OR nodejs) AND bash AND (c OR c++ OR java OR scala) AND (nfs OR glusterfs OR ceph) AND (gitlab OR gocd OR rundeck OR bamboo OR jenkins)</code></p>

<h1 id="second-barrier-passing-technical-guy-review">Second barrier: Passing technical guy review</h1>

<p>Put what &quot;you&quot; did, not what your team did. And use an achiever&#39;s language rather than a doer&#39;s language.</p>

<p>It is more relevant to <strong>put your achievements, contributions and the difference you made in a project</strong> rather than the tasks assigned to you, your daily duties or what your team delivered. While writing your experiences, try to impress the reader and create excitement as they read your CV. Try not to describe the project or its importance; instead, tell how exactly you contributed, which parts you owned and how you made a success story out of it.</p>

<p>You should also mention the technologies and techniques you used in your projects. This gives a hint about your expertise. E.g. Hyper-V, VMware, Xen and KVM could all be called virtualization, but if you don&#39;t mention the specific technology, you can&#39;t know what the reader will understand.</p>

<h1 id="less-relevant-information-last">Less relevant information last</h1>

<p>If it is not so relevant to your recent career, you should move your education below your experience section. If someone really wants to see what you studied, they&#39;ll likely look for it, but that interest is usually triggered by your experience. What you studied is relatively less important if you are not a new grad.</p>

<p>In development and DevOps, people usually don&#39;t care about your nationality, the languages you speak (and your level in each), your age or your interests. You can completely drop them or move them to the bottom of your CV.</p>

<p>The same goes for courses, seminars, certificates and references: place them at the bottom.</p>

<h1 id="your-name-is-the-title-of-the-paper">Your name is the title of the paper</h1>

<p>And try to put your name big and clear at the top, so if someone is searching for your CV in a pile, they can find it fast. This also helps people remember your name. Don&#39;t write it in a small font as regular text; it&#39;s more like the title of a book, not text with emphasis.</p>
]]></content:encoded>
      </item>
  </channel>
</rss>