<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Hybrid Entity]]></title><description><![CDATA[IT Architecture from the Trenches]]></description><link>https://midnight.engeneon.com/</link><image><url>https://midnight.engeneon.com/favicon.png</url><title>Hybrid Entity</title><link>https://midnight.engeneon.com/</link></image><generator>Ghost 4.48</generator><lastBuildDate>Thu, 19 Mar 2026 14:50:53 GMT</lastBuildDate><atom:link href="https://midnight.engeneon.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The Software Development Team as a Market]]></title><description><![CDATA[<p>A Capitalist, Democratic Alternative to Agile, Waterfall, SCRUM and the Spiral Models of Management.</p><p><br>Abstract</p><p>The Problem</p><p>The Solution (An Alternative)</p><p>The Proposition<br></p><p>What does this look like in the real world? </p><p>A Plausible Scenario: Describe a plausible scenario involving the gig economy but bounded within the corporate structure</p><p>Describe</p>]]></description><link>https://midnight.engeneon.com/the-software-development-team-as-a-market/</link><guid isPermaLink="false">66a0a40b960d9c000159747e</guid><dc:creator><![CDATA[T.G Liberatore']]></dc:creator><pubDate>Wed, 24 Jul 2024 06:54:51 GMT</pubDate><content:encoded><![CDATA[<p>A Capitalist, Democratic Alternative to Agile, Waterfall, SCRUM and the Spiral Models of Management.</p><p><br>Abstract</p><p>The Problem</p><p>The Solution (An Alternative)</p><p>The Proposition<br></p><p>What does this look like in the real world? 
</p><p>A Plausible Scenario: Describe a plausible scenario involving the gig economy but bounded within the corporate structure</p><p>Describe the interactions between business, broker and technical engineers/software developers</p><p>Describe the technologies facilitating the interactions between business and developers and the market</p><p>Core Principles </p><p>Software Development Teams cannot keep up with Market Demands for Advanced Technology: The speed of modern technological advancement and increasing market demand is not matched by the production systems used to deliver software and technology in general; software corporations, no matter how agile, seem to fall short of producing effective solutions that meet market demand.</p><p>A major contributing factor to this inability is poor resource allocation: This inefficiency is due to the inability of these management systems (Agile, Scrum, Waterfall, Spiral and their practitioners - the businesses that run them) to allocate human and technological resources efficiently in response to market demands.</p><p>Yet, highly effective resource management technologies and frameworks exist - yet to be tried: The Free Market, represented by the Capitalist Model, is widely acknowledged to be the most efficient resource-allocation system in history, yet the principles of this model are not leveraged within organisations to achieve the same level of resource allocation efficiency. 
</p><p>We posit that this means should indeed be leveraged to yield exactly the benefits seen in the global market system.</p><p>The Market Place of Ideas is Not Fully Democratised with current Software Development Methodologies: </p><p>While the democratic nature of the Capitalist Free Market system is widely acknowledged to yield many benefits in terms of innovation and creative freedom as well as speed of production, this aspect is hardly ever leveraged in modern corporate structures, even in systems as flexible as Agile or the Spiral Method, and certainly not in Waterfall. </p><p>We posit that this characteristic of the modern capitalist free market system should indeed be leveraged to yield exactly the benefits seen in the global market system.</p><p>There is a mismatch between the capabilities of available technology and the ability of IT and Software Development Teams to Leverage it: The capabilities of the information technology of the modern era are far beyond the capabilities of software development teams and methodologies to leverage them effectively. For example, Cloud offers resource allocation flexibility and automation features which most IT companies are unable to exploit to the maximum. 
Before the cloud era, systems like VMS, SUN Solaris Cluster, AIX, IBM Power-series and other &quot;Mainframe&quot; technologies possessed advanced and highly granular resource management and automation capabilities which remain unexploited to this day.<br></p><p>New Relationships: </p><p>The Business - Software Team Relationship, </p><p>The Business - Intermediary - Software Team Complex, </p><p>Relationship between Software Development Team Members, </p><p>Relationships between Competing Teams.</p><p>The Business</p><p>The Software Development Team</p><p>The &quot;Brokers&quot;/&quot;Middlemen&quot;<br></p><p>The &quot;How&quot;: Enabling the Software Development Team Market Model</p><p>Enabling Technologies: </p><p>Dynamic Contracts, </p><p>Granular work metrics and value estimation, precision time and labor tracking. Bidding exchanges, stock value instead of KPIs. </p><p>Enabling Organisational Structures</p><p>Enabling Legal Structures: Dissolution of the Non-Compete/Exclusivity Law.</p><p><br>Strengths of the Model</p><p>From the perspective of the employee</p><p>From the perspective of the employer</p><p>From the perspective of the middleman/broker</p><p>From the perspective of the SW Developer Team</p><p>From the perspective of The Market (Customer)<br></p><p>Weaknesses</p><p>Highly Competitive (Potentially Darwinian)</p><p>Subject to volatile market forces</p><p>New Opportunities Enabled by the Model</p><p>The Death of Venture Capitalists: Dynamically Emerging and Collapsing Ventures</p><p>The Democratization of Venture Capital (GoFundMe)<br>Potential Risks</p><p>Vulnerable Employees will suffer from market volatility</p><p>Vulnerable Enterprises will suffer from market volatility<br>Long Term Evolution of the Model: The Evolutionary Horizon: Dystopia or Utopia?</p><p>The Power Relationship between Employee and Employer</p><p>Custom Knowledge vs. 
Brute Artificial Intelligence</p><p>The Future of Intellectual Property</p><p>Robotics and the Role of Roboticization<br></p><p>Managing Change and Countering Resistance</p><p>There is Room for all: </p><p>Traditional Enterprises will remain &quot;as is&quot; </p><p>The Model will prove itself or perish<br></p><p>Conclusion<br></p><p>Such a model is certainly possible and plausible given the current trends in market and technology</p><p>But will require specialized software and systems to enable a true market - like unto a stock exchange</p><p>Potentially empowering to both employers and employees within the context of the Gig economy</p><p>While impacting the traditional Power relationship between employee and employer</p><p>With implications for IP via the concept of &quot;Custom Knowledge&quot; unique to the individual.</p><p>Having the potential to impact society in both dystopian and utopian ways.<br></p>]]></content:encoded></item><item><title><![CDATA[Aspects of Digital Transformation: I.T Management By Spreadsheet - An Anti-Spreadsheet Manifesto]]></title><description><![CDATA[In this article I put forward the opinion (founded on a few decades of gathered experience) that using excel spreadsheets to manage modern i.t infrastructure is analogous to using Sumerian clay tablets to record the progress of goods across the modern warehouse floor. 
]]></description><link>https://midnight.engeneon.com/aspects-of-digital-transformation-i-t-management-by-spreadsheet-an-anti-spreadsheet-manifesto/</link><guid isPermaLink="false">665af8552bf4ce0001c42530</guid><category><![CDATA[architecture]]></category><dc:creator><![CDATA[T.G Liberatore']]></dc:creator><pubDate>Sat, 01 Jun 2024 11:33:15 GMT</pubDate><media:content url="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2024/06/gilgamesh.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2024/06/gilgamesh.jpeg" alt="Aspects of Digital Transformation: I.T Management By Spreadsheet - An Anti-Spreadsheet Manifesto"><p>Overview</p><p>Over the course of 24 years in the IT industry I&apos;ve noticed a repeating theme in &quot;Enterprise&quot; IT teams that is worryingly &quot;backward&quot;.</p><p>The starting point for this observation is the practice in many IT teams of relying on MS Excel spreadsheets to manage a large variety of IT infrastructure tasks that should (and could) instead be automated or systematised using holistic software solutions.</p><p>In this article I put forward the opinion (founded on a few decades of gathered experience) that using excel spreadsheets to manage modern i.t infrastructure is analogous to using Sumerian clay tablets to record the progress of goods across the modern warehouse floor. </p><p>It is a &quot;mindset&quot; problem, and potentially a skills issue: &#xA0;A symptom of both an inability to automate systems effectively and a resistance to efficiency within IT infrastructure engineering teams.</p><p>The Observation:</p><p><em>There is a common practice of managing IT infrastructure and processes in Excel Spreadsheets as an alternative to putting in place automation to track I.T information and manage infrastructure holistically. 
</em></p><p><em>This practice hinders the improvement of I.T infrastructure and services to the point where it degrades the business&apos; ability to deliver I.T services to its customers efficiently, reliably and with integrity. </em></p><p><em>It takes a cognitive toll on IT teams and acts as a form of &quot;human resource filter&quot; in the sense: It contributes to the reasons why highly skilled talent leave the team and why less highly skilled talent who are comfortable with this method of operating stay with the organisation. </em></p><p>The Context</p><p>Given this observation, I have to qualify that the practice of managing Enterprise IT by excel spreadsheet is only truly detrimental in a certain context. This context is defined by three parameters:</p><ol><li>The profile of the IT organisation under discussion</li><li>The degree of IT automation possible in such an organisation</li><li>The Human Factor: The Skills, Capability and Mindset of the technical teams practicing IT within the organisation.</li></ol><p>First, I will provide a profile of the kind of organisation this article is aimed at:</p><ul><li>It&apos;s IT infrastructure is located in privately run data centers (&quot;on prem&quot;) and comprises a large amount of legacy IT infrastructure (e.g Solaris/UNIX Operating Systems running on SPARC, other variants of UNIX, networking, backup and storage systems more than 10 years old). </li><li> The organisation is in the initial stages of moving some of it&apos;s IT infrastructure from these datacenters to the cloud. The planning for this process is complicated by problems re-architecting legacy applications to run &quot;on the cloud&quot;.</li></ul><p>To further set context, the concept of IT Automation needs to clarified, as well the role of the &quot;human factor&quot; in IT teams. 
</p><ul><li>What do I mean by &quot;IT automation&quot;?</li></ul><p>In this article I use the following working definition of IT Automation:</p><p><em>&quot;Automation is the continuous process by which technology is used to make systems, processes and procedures less dependent on humans for execution, gradually reducing dependence on people for actual operations and placing them in an advisory or &quot;oversight&quot; role over systems.&quot;</em></p><p>In other words: A process of gradually moving the human being &quot;out of the loop&quot;.</p><ul><li>How is the &quot;human factor&quot; relevant? </li></ul><p>The degree of automation in an organisation is both a result of the human factor and an influence on the behaviour of the people working with it. Briefly:</p><p>a) The skills and capabilities of the people working in an IT organisation determine the degree of automation present in the organisation - perhaps even more than the actual nature of the organisation&apos;s core business itself.</p><p>b) The degree of automation in an organisation determines how efficiently its staff carry out IT-related tasks and the &quot;workload stress&quot; experienced by the staff. It therefore contributes psychological and social influences to IT teams.</p><p>Our Central Theme</p><p>Expanding further on our observation, it consists of three key &quot;tendencies&quot; on the part of some I.T infrastructure teams in certain types of organisations: </p><ul><li>The tendency to manually manage otherwise automatable I.T processes using spreadsheets (primarily Microsoft Excel). 
</li><li>The tendency to implement crude, piecemeal automation when automation could be implemented using holistic solutions designed from the ground up to support automation of processes.</li><li>The tendency to resist any attempts to migrate the organisation away from a spreadsheet-based IT management methodology.</li></ul><p>In short, we identify the following theme: </p><p><em>The practice of IT Management by excel spreadsheet is detrimental to the practice of automation, both limiting current automation practices and, by entrenching itself, limiting future attempts to automate IT within the organisation.</em></p><p>During the course of this article I will enumerate the arguments I&apos;ve heard from business, management and technical staff for why this approach to automation is justified from a business and technical point of view and compare them against the real consequences of the approach as I&apos;ve experienced them over the years.</p><p>Caveat: The Speculative Nature of this Article</p><p>While this analysis is based on empirically observed data points gathered over an extensive period of time across many industries practicing IT in many geographical and cultural locations, I must caution that it is not a rigorous academic study and should not be treated as such. </p><p>Potential for academic rigour exists in this area.</p><p>That said, the theme identified above is observable in reality and has concrete effects on the efficiency of IT in &quot;enterprise&quot; organisations. 
It is a real problem that should be addressed - regardless of whether sufficient academic rigour has been applied to it or not.</p><p>Potential Root Causes of resistance to automation</p><p>In my experience, most attempts to move IT organisations away from their (current) spreadsheet-based methodology are immediately resisted at the levels of technical staff and management and, to a lesser degree, at more senior levels of management (IT directorship).</p><p>Some of the reasons given over the years (some of which I&apos;ve personally heard when driving automation projects in traditional I.T organisations) are: </p><ul><li>Labour and Time-saving Efficiency (&quot;It&apos;s easier to do it by hand&quot;)</li><li>No fundamental justification for automation (&quot;we&apos;ve always done it that way, so why automate&quot;)</li><li>We&apos;re planning to upgrade to a better system soon, so automation is a waste of time</li><li>We&apos;re planning to move to the cloud, so automation of our current procedures is wasted</li><li>The risk of something going wrong while we automate and the risk of transitioning from our current manual method is too high (&quot;legacy lock in&quot;, &quot;analysis paralysis&quot;, &quot;fear of the unknown&quot;)</li><li>The system does not support automation (no automation APIs or interfaces).</li><li>&quot;We&apos;re too busy to automate!&quot;</li><li>We don&apos;t have the human and financial resources to automate (Organisational skills deficit)</li><li>The technology doesn&apos;t exist to automate this process (&quot;We think only an A.I or A.G.I can replace the human in this loop!&quot;)</li></ul><p>This is largely a list of invalid reasons for resisting automation, although some of them may be justifiable for a short period in an organisation&apos;s history.</p><p>We provide a table (ironically) of thematic counterpoints to these arguments that may be expanded in depth:</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>I</th>
<th>Argument</th>
<th>Counter Argument</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Labour and Time-saving Efficiency (&quot;It&apos;s easier to do it by hand&quot;)</td>
<td>1) It&apos;s probably only easier for staff already familiar with the process. 2) It gets progressively harder as new employees are handed the task and the old ones are replaced. 3) It becomes extremely risky if the staff familiar with the process are unavailable for some reason</td>
<td>Sometimes called the &quot;Job Security Defence&quot;</td>
</tr>
<tr>
<td>2</td>
<td>No fundamental justification for automation (&quot;we&apos;ve always done it that way, so why automate&quot;)</td>
<td>The justification for automation becomes apparent when human factors lead to error</td>
<td>The implication is that the human will never make an error, and that the time saved by not automating is worth the occasional low-risk error</td>
</tr>
<tr>
<td>3</td>
<td>We&apos;re planning to upgrade to a better system soon, so automation is a waste of time</td>
<td>Potentially a valid argument IF it is actually the case that the new system will negate the need for bespoke automation</td>
<td>Here one needs to determine whether this is merely kicking the can down the road with the intention of not automating once the new system is in place, and whether the new system indeed provides the automation features promised</td>
</tr>
<tr>
<td>4</td>
<td>We&apos;re planning to move to the cloud so automation of our current system is wasted</td>
<td>The inclination to automate is usually a key driver of moving to the cloud, and it tends to begin in the legacy environment as a model of the automation that will eventually be done in the cloud. Claims that automation can be deferred until the cloud migration is complete, when the skills and inclination have not been demonstrated &quot;on premise&quot;, should be regarded as unsubstantiated.</td>
<td>People tend to operate on the Cloud as they did in the legacy environment - if the inclination to automate wasn&apos;t part of systematic practice before it&apos;s unlikely to change without concentrated IT Governance efforts</td>
</tr>
<tr>
<td>5</td>
<td>The risk of something going wrong while we automate, and the risk of transitioning from our current manual method to automation is too high (&quot;legacy lock in&quot;)</td>
<td>This is usually an indication that the underlying process itself is not well understood and not deterministic. Identify the exact reason for this to isolate the cause of the potential risk - then address that aspect by redesign</td>
<td>IT business systems that are too risky to automate are usually poorly designed to begin with</td>
</tr>
<tr>
<td>6</td>
<td>The system does not support automation (no automation APIs or interfaces).</td>
<td>This is almost always a sign of a system so old and dysfunctional that it should be replaced as soon as possible</td>
<td>Few systems still in operation after 2020 don&apos;t support automation via APIs and instrumentation</td>
</tr>
<tr>
<td>7</td>
<td>&quot;We&apos;re too busy to automate!&quot;</td>
<td>The most common reason IT teams are too busy to automate is a chicken-and-egg situation: they are caught up in manual activities, and manual activities are time-consuming and wasteful</td>
<td>Break this cycle with an automation initiative led by a new hire.</td>
</tr>
<tr>
<td>8</td>
<td>We don&apos;t have the human and financial resources to automate (Organisational skills deficit)</td>
<td>Ironically, financial and human resources are required to do manually what should be automated.</td>
<td>The actual underlying claim here is usually &quot;We can&apos;t find people smart enough to automate - for any amount of money or newly hired staff&quot;</td>
</tr>
<tr>
<td>9</td>
<td>The technology doesn&apos;t exist to automate this process (&quot;We think only an A.I or A.G.I can replace the human in this loop!&quot;)</td>
<td>The claim that a particular business carries out IT processes that are beyond automation but happen within a deterministic IT environment is a contradiction in principle</td>
<td>This is usually a complaint about the complexity of the process to be automated, which suggests analysis and simplification should be carried out as a pre-requisite to automation</td>
</tr>
</tbody>
</table>
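
A recurring theme in the counter-arguments above is that spreadsheet-tracked state silently drifts away from reality. As a minimal, illustrative sketch (the hostnames, columns and &quot;live&quot; facts below are invented; in practice the facts would come from a CMDB, cloud SDK or monitoring API rather than a hard-coded dict), even a first automation step can reconcile a spreadsheet&apos;s CSV export against the systems themselves:

```python
# Illustrative sketch: reconcile a spreadsheet's CSV export against live facts.
# The hostnames, columns and "live" data are invented for the example; real
# facts would come from a CMDB, cloud SDK or monitoring API.
import csv
import io

spreadsheet_csv = """hostname,os_version
web-01,Ubuntu 20.04
web-02,Ubuntu 20.04
db-01,Solaris 10
"""

# Stand-in for facts gathered automatically from the systems themselves.
live_facts = {
    "web-01": "Ubuntu 20.04",
    "web-02": "Ubuntu 22.04",   # drifted since the sheet was last edited
    "app-01": "Ubuntu 22.04",   # never recorded in the sheet at all
}

def find_drift(sheet_text, facts):
    """Return (stale, missing, gone) host sets for sheet vs. reality."""
    recorded = {row["hostname"]: row["os_version"]
                for row in csv.DictReader(io.StringIO(sheet_text))}
    stale = {h for h in recorded if facts.get(h) not in (None, recorded[h])}
    missing = set(facts) - set(recorded)
    gone = set(recorded) - set(facts)
    return stale, missing, gone

stale, missing, gone = find_drift(spreadsheet_csv, live_facts)
print(sorted(stale))    # ['web-02'] - recorded state no longer matches reality
print(sorted(missing))  # ['app-01'] - host the spreadsheet never captured
print(sorted(gone))     # ['db-01'] - recorded but no longer reachable
```

The point is not these few lines of Python but the shift in workflow: once drift is detected by a scheduled job instead of a human re-reading the sheet, the &quot;human will never make an error&quot; assumption no longer has to hold.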
<!--kg-card-end: markdown--><p>Let&apos;s examine some plausibly valid reasons for avoiding initiatives to automate:</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>I</th>
<th>Reason</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>We don&apos;t have the money and resources to automate</td>
<td>Compare the costs of the current manual mode of operation with the costs of automation to validate this claim. However, if you&apos;re broke - then you&apos;re broke!</td>
</tr>
<tr>
<td>2</td>
<td>We don&apos;t have the technical skills to automate (outdated skillsets or skills gaps)</td>
<td>This seems to be an H.R problem ...</td>
</tr>
<tr>
<td>3</td>
<td>The Business and its IT infrastructure are about to be sold and we don&apos;t want to deliver any more value than what we were paid for.</td>
<td>Fair Enough. Let&apos;s hope the purchaser doesn&apos;t view this as a due diligence issue ...</td>
</tr>
<tr>
<td>4</td>
<td>The I.T department is about to be fired and outsourced and sees no reason to deliver any more value than the minimum operations.</td>
<td>Fair Enough. Let&apos;s hope the outsourcing partner doesn&apos;t view this as a due diligence issue ...</td>
</tr>
<tr>
<td>5</td>
<td>The process is so non-deterministic that there is no scientific and economically feasible means to automate (AKA &quot;the technology doesn&apos;t exist&quot;).</td>
<td>You must be running some Quantum-level tech to be unable to automate ...</td>
</tr>
</tbody>
</table>
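
Row 1&apos;s suggestion to &quot;compare the costs&quot; can be made concrete with simple arithmetic. The sketch below is illustrative only: every figure is an invented placeholder, and the model ignores the cost of errors and risk, which usually tip the balance further towards automation:

```python
# Back-of-the-envelope break-even for automating a recurring manual task.
# All figures are invented placeholders; substitute your own.
def breakeven_months(manual_hours_per_month, hourly_rate,
                     build_hours, maintenance_hours_per_month):
    """Months until automation pays back its build cost, or None if never."""
    monthly_saving = (manual_hours_per_month - maintenance_hours_per_month) * hourly_rate
    if monthly_saving <= 0:
        return None  # automation never pays back at these numbers
    return build_hours * hourly_rate / monthly_saving

# e.g. 20 h/month of manual work, 80 h to automate, 2 h/month of upkeep:
months = breakeven_months(20, hourly_rate=75, build_hours=80,
                          maintenance_hours_per_month=2)
print(round(months, 1))  # 4.4
```

At these example numbers the build effort pays for itself in under five months; note that the hourly rate cancels out, so the ratio of build hours to net hours saved is what actually matters.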
<!--kg-card-end: markdown--><p>Case studies</p><p>So much for theory and argument, but what real-world data points do I have to support the discussion thus far?</p><p><br>Below is a comparison of various companies I&apos;ve delivered consulting, implementation and operational services for in the past. Each approached automation with a different mindset, reaping different rewards from automation.</p><p>IT Automation was important for all these organisations - ultimately proving business critical - though varying in maturity across the sample set. </p><p>The total sample consists of 14 companies in 10 different industries across 5 countries on 4 continents over a span of 24 years, of which 6 are shown in the table below:</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>I</th>
<th>Business Type</th>
<th>Location</th>
<th>Automation Maturity</th>
<th>Criticality to Business</th>
<th>Barriers to Automation</th>
<th>Type of Systems Automated</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Commercial ISP</td>
<td>Africa</td>
<td>Mature</td>
<td>Internet Services Continue but provisioning stops without automation</td>
<td>Overworked Staff (firefighting)</td>
<td>realtime and batch systems, small-scale configuration management</td>
</tr>
<tr>
<td>2</td>
<td>Non-Profit ISP</td>
<td>Africa</td>
<td>Basic</td>
<td>Internet Services Continue, Provisioning can continue manually</td>
<td>Unwillingness to embrace automation, inclination to operate manually</td>
<td>batch, configuration systems</td>
</tr>
<tr>
<td>3</td>
<td>Public Cloud Provider</td>
<td>Global</td>
<td>Advanced</td>
<td>Automation is central to the business service</td>
<td>none. automation is mandated</td>
<td>realtime, batch, global scale configuration management, machine learning, analytics collection</td>
</tr>
<tr>
<td>4</td>
<td>Retail Brand Operator and store operator</td>
<td>Middle East</td>
<td>Moderate</td>
<td>Automation failure seriously slows operations</td>
<td>skill, legacy systems, legacy processes, fear of change, mindset</td>
<td>batch processing systems</td>
</tr>
<tr>
<td>5</td>
<td>eCommerce Startup</td>
<td>Middle East</td>
<td>Inconsistent (Basic, Mature)</td>
<td>Unable to operate at required scale without automation</td>
<td>Staff occupied with fire-fighting and startup MVP priorities</td>
<td>batch and realtime</td>
</tr>
<tr>
<td>6</td>
<td>eCommerce Business</td>
<td>Europe</td>
<td>Advanced</td>
<td>Unable to operate at required scale without automation</td>
<td>none. automation mandated</td>
<td>batch and realtime, medium configuration management, machine learning, analytics collection</td>
</tr>
</tbody>
</table>
<!--kg-card-end: markdown--><p>As can be seen from the table, some potential trends are suggested:</p><ul><li>Organisations that mandate automation achieve advanced automation and also achieve the advanced benefits of automation (e.g machine learning, analytics capabilities, large scale configuration management).</li><li>Organisations with an aversion to automation or a skills deficit achieve Moderate and Basic levels of automation and do not achieve more than automation of batch processes and small scale configuration operations.</li></ul><p>Counter-Automation Culture</p><p>Along our way to identifying the systemic root causes leading to poor automation in an organisation we find ourselves looking at &quot;mindset&quot; or the &quot;human factor&quot; more closely:</p><p>The symptom of I.T management by excel spreadsheet is an indication of a deeper problem: &quot;Fear of Automation&quot;, which manifests itself as a culture in I.T teams which works against initiatives to improve the efficiency of I.T systems.</p><p>This problem is known by its characteristic manifestations in an I.T team:</p><ul><li>The strongly held belief that it&apos;s more efficient to do a rare task manually than to spend much time automating it.</li><li>The fear that automating a system or process might increase the impact of potential issues</li><li>A general inability (or disinclination) to plan out the complete set of automation scenarios and evaluate the relative merits of each.</li></ul><p>I call this &quot;counter automation culture&quot; because once it establishes itself within an IT department it becomes part of the shared culture of the entire department. </p><p>It becomes even more recognisable as a &quot;culture&quot; when new employees join the department and meet with a behavioural &quot;wall of resistance&quot; to any initiatives to automate.</p><p>At this point the problem ceases to be a technological one and becomes a social issue (&quot;Human Factor&quot;). 
This brings me to Conway&apos;s Law and its influence on IT in organisations:</p><p>The Influence of Conway&apos;s Law</p><p>The structure of an organisation has a substantial influence over its inclination and ability to automate.</p><p>Conway&apos;s Law describes the tendency of an organisation&apos;s structure to influence the design of the systems it builds.</p><p>In short, it allows one to conclude that:</p><ul><li>If a company is rigidly hierarchical in structure it is unlikely to build effective microservice applications</li><li>If a company is highly autonomous and distributed in structure it is unlikely to have much success in building rigidly monolithic, strictly defined applications</li><li>If a company is poorly organised, with the boundaries between functional areas, policies and processes rigid in some places and poorly defined or absent in others, the software it builds will be similarly inconsistent in its design.</li></ul><p>Since the consistency and structure of software systems determines how well they can be automated, Conway&apos;s Law influences the quality and degree of automation one can expect from a given organisation.</p><p>Takeaway: One should be able to predict the quality of automation present in an organisation from a first glance at its organisational structure and dynamics.</p><p>Compare and Contrast: I.T Organisations that do automate vs. those that don&apos;t.</p><p>One way to assess the validity of the ideas we&apos;ve put forward thus far is to compare the behaviour of organisations that drive automation versus those organisations that substitute &quot;spreadsheet based management&quot; for automation.</p><p>This comparison can be done along multiple axes for a complete picture:</p><ul><li>Large companies vs. Small Companies</li><li>Cloud Companies vs. On Premise companies</li><li>HFT Trading companies vs. 
Everyone else</li></ul><p>It should quickly become clear that high performing organisations do not allow their IT automation to be dominated by manual &quot;spreadsheet based&quot; methods involving tools like MS Excel. Organisations that depend on scale and adaptivity to remain competitive would be the least likely to substitute spreadsheet based IT management for automation methods.</p><p>Recap: Why Automate? What are the consequences of not automating?</p><p>In light of what&apos;s been discussed so far in this article it might be useful to take a pause and ask ourselves whether any of this matters at all. </p><p>Specifically: &quot;Why bother with automation at all? Is it important?&quot;</p><p>Here we take a business perspective to understand the influence of the global market on business competitiveness ...</p><p>In short: </p><p>Automation of I.T provides an organisation with competitive capabilities that influence the entire operations of the business. It determines how fast an organisation can bring services to market and how efficiently it can operate those services without losing profits through overhead.</p><p>Against competitors in the market, an organisation&apos;s I.T sophistication can determine how much market share a business loses or wins and perhaps whether it continues as a viable competitor.</p><p>The Cure</p><p>Having identified &quot;IT Management by spreadsheet&quot; as a symptom of a deeper problem, I&apos;d like to propose a series of &quot;cures&quot; for the disease of &quot;counter automation culture&quot; in increasing order of risk:</p><ul><li>The direct approach: Cure the disease, not the symptom (i.e don&apos;t ban MS Excel just yet; first do a thorough process analysis of the automation needs)</li><li>Remove the risk of automation (&quot;De-risking&quot;) by upgrading systems, refining and stabilising processes and capturing undocumented procedures in documentation.</li><li>Remove all the &quot;non-automatable&quot; systems (embark on 
technology refresh, transformation and upgrade) and replace them with systems that offer good options for automation and integration</li><li>Institute new formal technological practices, approaches and certifications as incentives to learn and drive automation. Include automation as a KPI, OKR or other staff or team performance metric.</li><li>Sometimes you just need to fire all the backward people - carefully</li><li>Restructure Departments and Teams appropriately (using Conway&apos;s Law as a guide)</li></ul><p>Conclusion</p><ul><li>The symptom of I.T management by spreadsheet represents a failure of an organisation to adopt the mindset and culture of continuous improvement through automation. It is retrograde in character.</li><li>It means that engineers who have been entrusted with developing excellent infrastructure have instead opted to replace engineering with a form of &quot;spreadsheet based bureaucracy&quot;.</li><li>While this form of administrative bureaucracy may be minimally successful at keeping I.T infrastructure operational, it can never lead to continually improving service quality and will fail to meet the rising demands of a global market.</li></ul>]]></content:encoded></item><item><title><![CDATA[Chaos Engineering is for Optimized Systems Only]]></title><description><![CDATA[Chaos Engineering can be a waste of time ...]]></description><link>https://midnight.engeneon.com/in-my-technical-opinion-chaos-engineering-is-for-optimized-systems-only/</link><guid isPermaLink="false">659e96d506f62800017d21ad</guid><category><![CDATA[opinion]]></category><dc:creator><![CDATA[T.G Liberatore']]></dc:creator><pubDate>Wed, 10 Jan 2024 13:10:51 GMT</pubDate><content:encoded><![CDATA[<p>Chaos Engineering should not be attempted on systems which have not already been engineered to a high degree of stability and uptime.</p><p>In fact, there should be a threshold of availability and stability before which chaos engineering should not even be 
considered.</p><p>Otherwise it becomes a waste of engineer time and business resources.</p>]]></content:encoded></item><item><title><![CDATA[Cloud Infrastructure as Code (IaC): Automating IP Address Management]]></title><description><![CDATA[<h3 id="context">Context</h3><p>In the modern era cloud infrastructure is often deployed using declarative code like Terraform or Bicep or less declaratively using Cloud Development SDKs e.g &quot;Pulumi&quot; (TM).</p><p>This includes network topologies built from cloud IaaS components like VNETs (with private IP address space), Virtual Network Gateways, Cloud-to-Datacenter</p>]]></description><link>https://midnight.engeneon.com/untitled/</link><guid isPermaLink="false">659d453706f62800017d1fc4</guid><dc:creator><![CDATA[T.G Liberatore']]></dc:creator><pubDate>Tue, 09 Jan 2024 14:35:28 GMT</pubDate><content:encoded><![CDATA[<h3 id="context">Context</h3><p>In the modern era cloud infrastructure is often deployed using declarative code like Terraform or Bicep or less declaratively using Cloud Development SDKs e.g &quot;Pulumi&quot; (TM).</p><p>This includes network topologies built from cloud IaaS components like VNETs (with private IP address space), Virtual Network Gateways, Cloud-to-Datacenter links (e.g ExpressRoute), Virtual Network peering links, &quot;service endpoints&quot; and Public facing IP addresses to communicate with services on the internet.</p><p>For large organisations (including &quot;Enterprises&quot;) the management of private address space is complex enough &quot;on premises&quot;, let alone in the cloud. </p><p>Usually a dedicated networking team has to be involved in the assignment, provisioning, reclamation and renumbering of IP address space for cloud teams. </p><p>This task is often carried out using manual or semi-manual tools ranging from Excel Spreadsheets to an enterprise product like InfoBlox (TM) IPAM. 
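</p><p>The subnet arithmetic at the heart of these tools is straightforward to automate. As a minimal sketch (assuming a made-up 10.20.0.0/16 pool, not any real address plan), Python&apos;s standard ipaddress module can carve a reserved pool into VNET-sized ranges:</p><pre><code># Minimal sketch of IPAM-style subnet arithmetic using only the Python
# standard library. The pool and prefix sizes are illustrative examples.
import ipaddress

# Address space reserved for the cloud (hypothetical)
pool = ipaddress.ip_network("10.20.0.0/16")

# Carve the pool into /24 VNET-sized ranges and hand out the first free ones
subnets = list(pool.subnets(new_prefix=24))
allocated = {"team-a": subnets[0], "team-b": subnets[1]}

for owner, net in allocated.items():
    print(owner, str(net), "usable hosts:", net.num_addresses - 2)
</code></pre><p>A real IPAM would persist and track these allocations rather than hold them in memory, but the calculation itself needs no spreadsheet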
</p><p>The management of DNS records is often related to this process if a centralised tool like InfoBlox is in place.</p><p>Cloud Engineers then consume the networking information managed with these tools to configure IaC for the cloud using a separate workflow.</p><h3 id="problem-statement">Problem Statement</h3><p>The management of IP address space in the cloud is an ongoing, often manual, iterative task involving mistakes and a degree of trial and error in getting address space correctly subnetted, assigned and configured.</p><p>When this process is required to configure infrastructure code for the cloud (e.g Terraform IaC), it can result in code errors and deployment failures due to the increasing complexity of configuring the code to reflect the right IP address assignments.</p><p>All this would be manageable if the infrastructure code (often written in Terraform or Bicep), automation (CI/CD pipelines) and infrastructure &quot;state&quot; were robust and recovered from configuration errors easily.</p><p>However, this is emphatically not the case in general for the cloud. The following problems are common:</p><ul><li>Networking configuration errors lead to IaC deployment failures in the cloud even when the code passes tests prior to actual deployment, simply because of hidden business rules of the Cloud Platform Vendor. </li><li>Recovery from those errors loses hours or days of DevOps team manpower. 
</li><li>In a complex organisation with an extensive cloud hub-and-spoke network topology, the errors can add up to days of lost engineering time in a given year, resulting in service downtime and frustrated DevOps engineers.</li><li>To sum up: The more complex an organisation&apos;s networking requirements are in the cloud, the greater the need for a reliable system of managing IP address pools in code.</li></ul><h3 id="what-is-the-ask">What is the ask?</h3><p>As a Network Manager responsible for managing address space across multiple clouds, I want to be able to provision a pool of address space to VNETs, subnets, interfaces, physical links and gateways on the cloud flexibly, and track the utilisation of address space and its ownership by teams and projects.</p><p>I want primarily to:</p><ul><li>Provision and reclaim address space as applications are deployed and removed from the cloud </li><li>Track the history of ownership </li><li>Make IP address space available to Infrastructure Code via automated API queries. </li></ul><p>In addition to these core requirements (or &quot;wants&quot;):</p><ul><li>I want smooth integration of IP address provisioning with CI/CD pipelines and network automation tools, without having to manually do subnet calculations every time I assign a range from the reserved pool of available IP addresses. </li><li>I would prefer Cloud Application or Engineering teams manage the detailed subnet allocations in the cloud for their specific applications directly rather than relying on the Network Team.</li></ul><p>What I want to avoid:</p><ul><li>I do not want to maintain IP address records on the specifics of address assignment in the cloud using a system of documentation based on MS Excel or notepad. 
While these documents should be available on demand, they should not be the core means of managing IP address space in the organisation.</li><li>I do not want to buy a million-dollar Infoblox solution to manage my entire organisational address space - I have enough infrastructure to maintain as it is.</li></ul><h3 id="what-problem-does-this-solve">What Problem Does This Solve?</h3><p><strong>Reduction of Administrative Errors:</strong></p><p>Management of IP addresses in a consistent, reliable and error-free way prevents automation failures when provisioning IP addresses to cloud infrastructure due to address space overlap. Moreover, it reduces the cascading Infrastructure Code bugs and failures which result in application downtime when mistakes with IP address assignment inevitably happen.</p><p><strong>Improvement of configuration consistency:</strong></p><p>The IPAM solution would also carry out tasks like ensuring IP address ranges remain contiguous as they are allocated/de-allocated in a VNET/VPC.</p><p><strong>Smoother, Faster Cloud Deployment Workflow:</strong></p><p>Automating the provisioning and assignment of address space as well as the consistent management of addresses speeds up the cloud deployment workflow and reduces the I.T workload on teams.</p><h3 id="what-value-does-this-add">What Value Does This Add?</h3><p>Increased reliability in IP address management leads to more resilient Cloud automation and infrastructure overall, which in turn saves costs and time spent maintaining infrastructure. Unifying the IP address management workflow and the cloud configuration workflow speeds up operational processes and delivery dramatically (if done right!). 
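</p><p>To make the &quot;unified workflow&quot; idea concrete: a deployment pipeline could query the IPAM API for an application&apos;s allocation and render it as Terraform input variables. The sketch below is purely illustrative; the endpoint path and JSON field names (&quot;cidr&quot;, &quot;subnets&quot;) are hypothetical, not taken from any specific IPAM product:</p><pre><code># Sketch: turn a (hypothetical) IPAM API response into terraform.tfvars.json.
# In a pipeline the dict would come from an HTTP GET against the IPAM API.
import json

def ipam_response_to_tfvars(response):
    """Render an IPAM allocation record as a terraform.tfvars.json document."""
    tfvars = {
        "vnet_cidr": response["cidr"],
        "subnet_cidrs": [s["cidr"] for s in response["subnets"]],
    }
    return json.dumps(tfvars, indent=2)

# Example allocation record, e.g. from GET /api/v1/allocations/app-x (hypothetical)
example = {
    "cidr": "10.20.5.0/24",
    "subnets": [
        {"name": "frontend", "cidr": "10.20.5.0/26"},
        {"name": "backend", "cidr": "10.20.5.64/26"},
    ],
}
print(ipam_response_to_tfvars(example))
</code></pre><p>Terraform would then consume the generated tfvars file in the same pipeline run, with no manual hand-off between the network and cloud teams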
</p><p>This reliability and operational efficiency can be achieved with <em>effective</em> automation (note the emphasis!).</p><p>The advantages for automation (including self service) seem compelling in terms of opportunities for operational agility:</p><ul><li>Networking and Infrastructure Teams could pre-provision broad ranges of address space in the IP address database &#xA0;for the Cloud following which Cloud Engineers could allocate or de-allocate ranges &quot;programmatically&quot; to teams, functions, applications, other organisations.</li><li>Application teams deploying infrastructure to our cloud could access IP address provisioning information to configure their Infrastructure Code in the deployment pipeline at run time, or prior. They could access the correct IP address ranges for any given application programmatically without needing the help of Cloud Infrastructure Engineers. All of this could be part of a single unified automation workflow instead of separate processes requiring manual coordination between teams.</li></ul><h3 id="build-tasks-a-quick-brainstorm">Build Tasks: A quick brainstorm</h3><p>How would we build this &quot;Cloud Native IPAM Solution&quot; ?</p><p>Because we&apos;re &quot;agile&quot; we jump straight into thinking about what we should build. After all, who in this era has time for &quot;architecture&quot; right???</p><p>The following key components spring to mind at first glance:</p><pre><code>&#x2022; IPAM IP calculation engine (IP address calculator)
&#x2022; IPAM query and update interface (API, CLI and/or U.I)
&#x2022; IP Address space tracking database of some sort
&#x2022; Provisioning/Deprovisioning workflow along the lines of a CI/CD pipeline
&#x2022; Integration with IaC through templating engine
</code></pre><p>At a very high level, our &quot;Cloud Native IPAM Solution&quot; fits into the enterprise as pictured below:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2024/01/ipam-highlevel.drawio.png" class="kg-image" alt loading="lazy" width="894" height="511"><figcaption>Integration of the IPAM service into existing cloud context</figcaption></figure><h3 id="a-little-architecture-wouldnt-hurt-would-it">A little Architecture Wouldn&apos;t Hurt, Would it?</h3><p>Well, maybe we shouldn&apos;t dismiss &quot;Architecture&quot; so hastily; after all, this thing is beginning to look a little complicated! </p><p>Let&apos;s draw some pictures!</p><p>At a slightly more detailed systems level, we envision the components of the IPAM solution to have the following interrelationships:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2024/01/ipam-iac-automation.drawio.png" class="kg-image" alt loading="lazy" width="1000" height="783"><figcaption>IP Address Management for Infrastructure-as-Code Implementations</figcaption></figure><p>The solution can be described as follows by enumerating its components (in summary):</p><ol><li><strong>Data Model:</strong> Provides an abstract scheme for collecting all the properties of IP addresses and address spaces and their interrelationships as one consistent information structure.</li><li><strong>Business Logic Engine: </strong>Program logic which applies the correct rules for operating on the Data Model for IP addressing. 
This could be code which provides operations for manipulating the information in the data model according to allowed rules.</li><li><strong>Relational Schema: </strong>The SQL relational schema implementing the data model in a format which can be represented in a database.</li><li><strong>Relational DB: </strong>An actual physical database which hosts the relational schema of the data model and provides features for managing the information and the model programmatically.</li><li><strong>Database Storage: </strong>The physical storage medium which will be used by the relational database software to store the actual IP addressing data.</li><li><strong>API: </strong>An interface which allows programmatic access to the features of the data model and the information within the data model. This would include commands to allocate IP address ranges, search for available addresses, decommission address space/ranges, assign to applications and teams and check the status of addresses. The API would be consumed by users and other scripts via web interfaces, CLIs and internet protocols like REST, Websockets, gRPC. 
</li><li><strong>Deployment Automation Process: </strong>An automated process (e.g script, CI/CD pipeline, A.I agent, management service) which consumes IP address information to configure software running in The Cloud or generate configuration code like &quot;Terraform&quot; and &quot;Bicep&quot; (IaC DSL).</li><li><strong>User Portal: </strong>The User Portal provides a very lightweight interface to inspect the IP address information in the IPAM database, annotate IP address information and carry out CRUD operations on it.</li></ol><p><strong>Getting Down to Nuts and Bolts</strong></p><p>After this rather &quot;lean&quot; design exercise, we should be ready to translate our moderately abstract description of the solution into actual technologies and components which could be used to build a prototype.</p><p>To do this we transform our eight-point description into a more detailed specification of the system:</p><ol><li><strong>Data Model:</strong> Provides an abstract scheme for collecting all the properties of IP addresses and address spaces and their interrelationships as one consistent information structure.</li><li><strong>Business Logic Engine: </strong>Program logic which applies the correct rules for operating on the Data Model for IP addressing. This could be Python script code which provides definitions wrapping operations for carrying out subnetting calculations and updates of address metadata (CRUD). </li><li><strong>Relational Schema: </strong>A SQL relational schema implementing the data model in a format which can be represented in a database. 
</li><li><strong>Relational DB: </strong>An actual physical database which hosts the relational schema of the data model and provides features for managing the information and the model programmatically. We should prefer a &quot;service-less&quot; database to reduce the number of moving parts in the solution, perhaps SQLite, or PostgreSQL if needed.</li><li><strong>Database Storage: </strong>The physical storage medium which will be used by the relational database software to store the actual IP addressing data. The type of storage medium (spinning rust, solid state/RAM) would be influenced by the degree to which we need the solution to scale.</li><li><strong>API: </strong>An interface which allows programmatic access to the features of the data model and the information within the data model. This would include commands to allocate IP address ranges, search for available addresses, decommission address space/ranges, assign to applications and teams and check the status of addresses. The API would be consumed by users and other scripts via web interfaces, CLIs and internet protocols like REST, Websockets, gRPC. </li><li><strong>Deployment Automation Process: </strong>Here we&apos;d want to build IP address configuration to be used in Terraform IaC for the deployment of user VNETs/VPCs and the subnet layout in them.</li><li><strong>User Portal: </strong>The User Portal provides a very lightweight interface to inspect the IP address information in the IPAM database, annotate IP address information and carry out CRUD operations on it. 
This could be implemented using a containerized web frontend like Flask/ Django etc ...</li></ol><h3 id="now-let-us-build">Now Let Us Build!</h3><p>We have enough of a picture of our IPAM solution in mind by now to implement a prototype/PoC and test if it integrates with Terraform infrastructure Code for our Azure Cloud Network Implementation.</p><p>The following tasks (in no particular order) will be the key steps in our build process:</p><ul><li>We will structure our Terraform code to consume IP address configuration provided by an external IP db, perhaps via environment variables in a deployment pipeline (Jenkins?) or by direct query to the IPAM API.</li><li>We will implement a CI/CD pipeline to extract the IP addresses and metadata from IPAM via an API and configure it in Terraform IaC for specific deployments.</li><li>We will implement a data model, translate it to a SQL schema, configure it in an RDBMS.</li><li>We will implement a &quot;rules engine&quot; with an API to operate on the data in the db according to the data model.</li><li>We will implement a lightweight API to allow programmatic access to the functions in the data model from external scripts and systems.</li><li>We will implement a lightweight web U.I to carry out all operations on the data in the data model to maintain the IP address data.</li></ul><h3 id="but-first-principles">But first: Principles</h3><p>Hold on. 
Not so fast.</p><p>We should settle on architectural guiding principles to evaluate and ensure the success of our implementation, define expectations of the solution and build efficiently:</p><ul><li>Lean Architecture: Design the MVP first</li><li>Scalability not a concern at this point</li><li>Agile Implementation: Deliver a working solution quickly, simply.</li><li>Evaluate based on fitness for purpose in an actual usage scenario</li><li>Robust implementation: Avoid fragile components and design even at the cost of features.</li><li>Containerize and package the entire self-contained solution in a highly modular fashion.</li><li>COTS technologies with minimal code</li><li>Monolithic design in the MVP to aid &quot;self contained packaging&quot;</li></ul><p>Understanding the approach, we can now move on to the actual implementation ...</p><h3 id="the-implementation">The Implementation</h3><p>We begin with the Data Model, without which the design is essentially meaningless. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2024/01/cloud-ipam-datamodel.drawio.png" class="kg-image" alt loading="lazy" width="1192" height="830"><figcaption>Minimum Viable Data Model (Conceptual Representation)</figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Conway's Law in Enterprise Digital Transformation Projects]]></title><description><![CDATA[<p>Abstract:</p><p></p><p>In brief: Conway&apos;s law states that the software and systems design an organisation produces tends to resemble, in architecture, it&apos;s organisational structure.</p><p>After two decades of digital transformation projects it&apos;s been my personal experience that Conway&apos;s law becomes especially visible during</p>]]></description><link>https://midnight.engeneon.com/conways-law-in-enterprise-digital-transformation-projects/</link><guid 
isPermaLink="false">64c3e2a6998b6700013bef87</guid><dc:creator><![CDATA[T.G Liberatore']]></dc:creator><pubDate>Fri, 28 Jul 2023 15:53:33 GMT</pubDate><content:encoded><![CDATA[<p>Abstract:</p><p></p><p>In brief: Conway&apos;s law states that the software and systems design an organisation produces tends to resemble, in architecture, its organisational structure.</p><p>After two decades of digital transformation projects it&apos;s been my personal experience that Conway&apos;s law becomes especially visible during the process of migrating an organisation to a new technological paradigm or to an improved version of its current paradigm. </p><p>Most importantly, without awareness of Conway&apos;s law during the digital transformation process one can fall afoul of its implications if they are ignored in one&apos;s planning.</p><p></p><p>....</p>]]></content:encoded></item><item><title><![CDATA[A Simple URL Shortening Service]]></title><description><![CDATA[<h2 id="problem-statement">Problem Statement</h2><p>The customer requires a simple API-based service for a) shortening URLs (or otherwise encoding them) and b) retrieving the encoded/shortened URLs for lookup via a client browser (or bulk lookup tool ...).</p><p>(Don&apos;t ask me why such a thing would be useful - bit.ly springs</p>]]></description><link>https://midnight.engeneon.com/a-simple-url-shortening-service-2/</link><guid isPermaLink="false">60e7a1f25013bf0001de14be</guid><dc:creator><![CDATA[T.G Liberatore']]></dc:creator><pubDate>Fri, 09 Jul 2021 01:12:29 GMT</pubDate><content:encoded><![CDATA[<h2 id="problem-statement">Problem Statement</h2><p>The customer requires a simple API-based service for a) shortening URLs (or otherwise encoding them) and b) retrieving the encoded/shortened URLs for lookup via a client browser (or bulk lookup tool ...).</p><p>(Don&apos;t ask me why such a thing would be useful - bit.ly springs to mind)</p><p>Let&apos;s begin with the high level architectural design 
and gradually drill down to implementation in code.</p><h2 id="system-description">System Description</h2><p>The System implements the following functionality:</p><pre><code>&#x2022; Accepts an HTTP/S POST to shorten (encode) a URL from a browser client to an API endpoint.
&#x2022; The API calls functions to encode the URL and stores it in a Redis database key-value table for later lookup
&#x2022; A separate API service accepts an HTTP/S GET request for the shortened URL, looks up the shortened URL in the Redis table and returns the decoded URL in an HTTP/S 302 REDIRECT response to the client browser. This allows the user client to access the decoded URL via HTTP/S
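</code></pre><p>A rough sketch of this encode/store/redirect flow in Python (with a plain dict standing in for the Redis table; the salt handling and 7-character code length are illustrative choices, not the deployed implementation):</p><pre><code># Sketch of the shorten/lookup flow. A dict stands in for the Redis table.
import hashlib, hmac, os

SALT = os.urandom(16)   # random salt, per the HMAC-SHA1 scheme described later
table = {}              # maps short code to the original URL

def shorten(url):
    """Return a short code for url, storing the mapping for later lookup."""
    short = hmac.new(SALT, url.encode(), hashlib.sha1).hexdigest()[:7]
    table[short] = url  # the real API returns 200 only after a successful write
    return short

def lookup(short):
    """Return the original URL; the API wraps this in a 302 redirect."""
    return table.get(short)

code = shorten("https://example.com/some/very/long/path")
print(code, lookup(code))
</code></pre><p>The real services would of course sit behind the HTTP handlers and a Redis client rather than in-process state.</p><pre><code>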
</code></pre><p>The System design achieves the following non-functional requirements in the described way:</p><figure class="kg-card kg-image-card"><img src="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2021/07/Screenshot-2021-07-09-at-9.02.42-AM.png" class="kg-image" alt loading="lazy" width="2222" height="854"></figure><p>Architectural Diagram</p><figure class="kg-card kg-image-card"><img src="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2021/07/system-design-exercise-bitly.png" class="kg-image" alt loading="lazy" width="1236" height="1739"></figure><h2 id="continuous-delivery-automation">Continuous Delivery Automation</h2><p>To achieve Continuous Delivery or Continuous Deployment the following deployment pipeline is included in the design:<br>A Jenkins pipeline is defined and implemented using Jenkins declarative pipeline script to deploy the Infrastructure of the system as Code (IAC):</p><pre><code>a) Terraform is used to create the AKS cluster, Redis cache service, Azure Load Balancer, DNS CNAME records. This code is versioned in a git repository
b) The Kubernetes Pod and ingress configuration is defined as Helm charts also stored in a separate Git Repo
c) Credentials and Secrets required by the service for accessing the Redis DB and other secure services are stored in a credential vault accessed by the Jenkins pipeline to embed in the deployment. This ensures no secrets are stored in the git repos.
d) The Jenkins deployment pipeline itself is defined using either Jenkins Pipeline script or Groovy directly and kept as versioned code in Git.
e) &quot;Continuous Integration&quot; is implemented to improve quality of the deployment code by triggering a build of a development/staging system every time changes are made to a git master branch.
f) The Redis DB Schema is versioned in Git as code and deployed as a separate stage of the pipeline
</code></pre><h2 id="redis-database-table-schema">Redis Database Table Schema</h2><p>Shortened URL code field name: shortURL<br>Actual URL code field name: trueURL</p><h2 id="url-shortening-mechanism">URL Shortening Mechanism</h2><p>The URL shortening method is to run the URL through an HMAC-SHA1 hashing function with a random salt to increase the chances of uniqueness.</p><p>Here we use HMAC-SHA1 because it is a one-way hashing function and does not allow the URL to be decoded from the hash itself (you need to look it up in the Redis table).</p><p>$hash = hmac_sha1($random_salt, $my_url);</p><h2 id="core-service-components">Core Service Components</h2><pre><code>1. The API Servers

&#x2022; The two API servers are implemented as HTTP Servers in Golang or Python
&#x2022; These handle POST and GET requests separately
&#x2022; The services run in Docker containers in separate pods
&#x2022; An API server receives the POST request to encode a URL and looks up the URL
&#x2022; If URL is found in the Redis table, the API returns: &quot;Duplicate&quot; in response
&#x2022; If URL is not in the Redis table, it encodes it using a simple shortening algorithm
&#x2022; It returns HTTP 200 OK with the short URL only on a successful write to the Redis table
&#x2022; An API server receives a GET request including the shortened URL and does a lookup
&#x2022; If the short URL is found in the table, the original URL is returned using a query
&#x2022; This URL is formatted into a 302 redirect response by the API HTTP Service</code></pre>]]></content:encoded></item><item><title><![CDATA[Portal]]></title><description><![CDATA[members only portal
]]></description><link>https://midnight.engeneon.com/portal-2/</link><guid isPermaLink="false">60d0bc67509c68000183ef41</guid><dc:creator><![CDATA[T.G Liberatore']]></dc:creator><pubDate>Mon, 21 Jun 2021 16:22:39 GMT</pubDate><media:content url="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2021/06/portal.jpg" medium="image"/><content:encoded/></item><item><title><![CDATA[Design and Build a CMDB for Cloud Infrastructure]]></title><description><![CDATA[<p></p><ol><li><strong>The Scenario</strong></li></ol><p>A customer contracted me to develop an automated, code-based framework for supporting operations in The Cloud. </p><p>In the I.T industry this is generally called &quot;Landing Zones&quot; or &quot;Cloud Foundations&quot; and &#xA0;is used to support the &quot;Cloud Standard Operating Environment&quot;.</p><p>In</p>]]></description><link>https://midnight.engeneon.com/building-a-cmdb-for-cloud-infrastructure-automation/</link><guid isPermaLink="false">60ced440509c68000183ef14</guid><category><![CDATA[architecture]]></category><dc:creator><![CDATA[T.G Liberatore']]></dc:creator><pubDate>Sun, 20 Jun 2021 05:39:02 GMT</pubDate><media:content url="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2021/06/cmdb-source-of-truth-2.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2021/06/cmdb-source-of-truth-2.jpg" alt="Design and Build a CMDB for Cloud Infrastructure"><p></p><ol><li><strong>The Scenario</strong></li></ol><p>A customer contracted me to develop an automated, code-based framework for supporting operations in The Cloud. 
</p><p>In the I.T industry this is generally called &quot;Landing Zones&quot; or &quot;Cloud Foundations&quot; and &#xA0;is used to support the &quot;Cloud Standard Operating Environment&quot;.</p><p>In short, the customer&apos;s infrastructure in The Cloud would be defined using &quot;Infrastructure as Code&quot;, deployed and configured using CI/CD pipelines and automation and managed using a DevOps/SRE philosophy.</p><p>In simple terms, my task resolved to:</p><p>&quot;Write a complex collection of Terraform, BASH and Jenkins pipelines to define Cloud infrastructure and security guardrails, deploy resources to the cloud and configure the infrastructure thereafter. Make this code integrate with existing authentication systems, networks and security systems. Make it support production and non production environments across all business units of the customer&apos;s organisation.&quot; </p><p>This approach of managing Cloud infrastructure &quot;as code&quot; with maximal automation is a departure from an earlier era of manually or semi-scripted management of Cloud operations by teams of I.T staff. It aspires to bring about a fully automated, fully &quot;code defined&quot; cloud operating model for businesses who have their I.T infrastructure and applications hosted in the cloud. </p><p>It maximises the value, purpose and capabilities of &quot;The Cloud&quot;.</p><p><strong>2. Existential Questions</strong></p><p>Given that the task is to develop &quot;Infrastructure as Code&quot;, what has CMDB got to do with all of this? Do we need a CMDB at all? What value overall does it offer? What are the alternatives?</p><p>When writing infrastructure code for large, complex organisations (not startups or small businesses!) 
the situation inevitably arises that a large amount of complex configuration needs to be maintained for the infrastructure code in the cloud. A few examples:</p><ul><li>IP addressing information for subnets assigned to applications</li><li>Cloud accounts, projects or subscriptions assigned to applications/teams or business units</li><li>Environment Domain and environment profiles for each application (prod/non-prod, sit/uat/dev/sat etc ...)</li><li>Access profiles for various application teams and users in the cloud</li><li>Integration information for Infrastructure code to access API endpoints providing additional configuration services</li><li>The actual provisioning and status of resources/services in the cloud relative to applications and teams using them.</li></ul><p>Very soon, one finds that this information cannot be efficiently included in the Infrastructure Code as variables or parameters in flat configuration files while still keeping the codebase as &quot;abstracted&quot; as possible.</p><p>Another requirement that develops is that other systems within the organisation want to access the shared configuration, or perhaps are the &quot;single source of truth&quot; for some configuration information. For example, perhaps IP addressing information is maintained by a networking team in an IPAM of sorts and this information must be regularly refreshed.</p><p>Some kind of actively maintained centralised database of configuration data is required in this case. A database perhaps, with a frontend and automation capable of integrating configuration data, which could be used to supply the configuration parameters for abstracted infrastructure code.</p><p><strong>3. 
Philosophy of Approach</strong></p><p>Before investing effort into implementing a CMDB it made sense to think more deeply about it from a &quot;systems&quot; perspective:</p><ul><li>The &quot;meaning&quot; and implications of CMDB in I.T infrastructure</li><li>The importance of Agility, Simplicity, Flexibility, Robustness</li><li>The &quot;Minimum Viable Product&quot;</li></ul><p></p><p><strong>4. The Scope of the Task</strong></p><p>The scope of the CMDB implementation has to be clearly understood, defined and documented (yes, in that order!) before any <em>formal</em> implementation is done. The primary reason for this is to ensure the best work can be focused on the parts of the CMDB solution that matter most to its eventual users. </p><ul><li>Context is Key</li><li>Defining the MVP </li><li>The needs of the moment vs. the needs of the future vs. the needs of the past</li><li>The needs of the many vs. the wants of the few</li></ul><p><strong>5. Dividing and Conquering</strong></p><p><em>&quot;The Whole is Greater than the Sum of its Parts&quot; - Aristotle</em></p><ul><li>The Sum of the Parts</li><li>The Parts</li><li>The Whole</li></ul><p><strong>6. The Build</strong></p><p>&quot;A journey of a thousand miles begins with a single step&quot; &#x2013; The Tao</p><p>Building a CMDB, no matter how &quot;lean&quot; we attempt to architect it, is always a complex and labour-intensive task, primarily because it is a bespoke integration task tailored to the organisation we&apos;re building it for. Yes, there are some common elements but ultimately there is always a requirement to integrate with at least a few legacy elements of the customer&apos;s infrastructure and some unique legacy choices they&apos;ve made. These few variations are enough to compound into a significant amount of complexity in practice.</p><p>However, this can all be mitigated by breaking the implementation into tiny, manageable tasks using the SCRUM methodology.</p><p><strong>7. 
Was it all worthwhile? The Retrospect</strong></p><p><em>&quot;Was it all worth it yeah yeah, giving all my heart and soul<br>Staying up all night, was it all worth it<br>Ooh living breathing rock &apos;n&apos; roll this never ending fight<br>Was it all worth it, was it all worth it<br>Yes it was a worthwhile experience<br>Ha ha ha ha haa<br>It was worth it<br>Ha ha&quot; &#x2013; Queen</em></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Welcome to "I.T Architecture from the Trenches"]]></title><description><![CDATA[Inaugural post for "I.T Architecture from the Trenches"]]></description><link>https://midnight.engeneon.com/welcome-2/</link><guid isPermaLink="false">60cec493509c68000183eeb0</guid><category><![CDATA[architecture]]></category><dc:creator><![CDATA[T.G Liberatore']]></dc:creator><pubDate>Sun, 20 Jun 2021 04:41:11 GMT</pubDate><media:content url="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2021/06/hope-vision.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://digitalpress.fra1.cdn.digitaloceanspaces.com/1h3llgo/2021/06/hope-vision.jpg" alt="Welcome to &quot;I.T Architecture from the Trenches&quot;"><p>Welcome to &quot;I.T Architecture from the Trenches&quot; a blog and newsletter dealing with the design and implementation of actual I.T Solutions in The Cloud and elsewhere.</p><p>Here you will find retrospective and limited prospective reflections and analyses on &quot;real world&quot; I.T projects we have implemented or are in process of implementing. </p><p>This site is focused on our real and direct experiences in implementing complex I.T services and systems in the public and private cloud. </p><p>We document the design challenges, solutions and detailed implementation complexities of actual systems and their mechanisms. 
</p><p>We try to avoid theorising about systems that might be built, or even how they might be built better (we leave that to you, dear reader) and we focus on how, ultimately, a system *was* built or is in the process of *being* built and what the actual results of the build were.</p><p>As one might infer from the previous statements, the guiding principle for articles on this site is &quot;pragmatism in the present&quot;.</p><p>Another key guiding principle for our explorations in I.T is the &quot;Systems Engineering&quot; lens: We always try to take a &quot;Systems Thinking&quot; approach to the conceptualisation, architecture, design and build of information systems.</p><p>And now, onward to the trenches!</p>]]></content:encoded></item></channel></rss>