Deixei

making dreams digital realities

Taxonomy - Metadata schema

Taxonomy is a system for organising, categorising, and identifying items in a hierarchical structure based on shared features; such a system is referred to as a classification system. It is a central idea in a variety of disciplines, including biology, library science, and information management. The word “taxonomy” derives from two Greek words: “taxis”, meaning arrangement, and “nomia”, meaning method.

Within biology, taxonomy is the scientific study of identifying, describing, classifying, and naming organisms, including plants, animals, and bacteria. Biologists use an organism’s physical and genetic properties, together with a hierarchical classification system, to place it into a group. The Linnaean classification system, which categorises organisms into kingdoms, phyla, classes, orders, families, genera, and species, has gained widespread recognition over the years.

Compliance

Compliance is an essential component of every successful business operation, and it is not limited to the software products a company sells. To ensure that their businesses are run in a manner that is both ethical and legal, many different types of businesses have developed their own sets of regulations and standards on top of the laws that apply to them. In today’s digital age, where data breaches and privacy issues are prevalent, compliance plays an important role in protecting a firm’s integrity and the confidence placed in it.

Cloud Governance

Cloud governance is a framework that organisations establish to manage their cloud computing environment effectively. It comprises the policies, processes, and controls that ensure cloud resources are used securely, cost-effectively, and in compliance with regulations. Establishing clear policies, protocols, and controls for managing cloud workloads is essential to enhance operational efficiency, reduce risk, and support business growth and innovation. Cloud governance covers aspects such as security, compliance, cost optimisation, and identity and access management.

Creating reference architectures

A RACI matrix is a tool used to identify the roles and responsibilities of team members in a project or process. RACI stands for Responsible, Accountable, Consulted, and Informed, and the matrix assigns one of these four roles to each team member for each task or deliverable.
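
As a rough illustration, here is a minimal sketch of how a RACI matrix might be represented in Python; the tasks, roles, and people below are hypothetical examples, not part of any real project.

```python
# A minimal sketch of a RACI matrix as a Python structure.
# Tasks, roles, and people are hypothetical examples.
raci = {
    "Design reference architecture": {
        "Responsible": ["Cloud Architect"],
        "Accountable": ["Head of Engineering"],
        "Consulted":   ["Security Officer"],
        "Informed":    ["Product Owner"],
    },
    "Deploy landing zone": {
        "Responsible": ["DevOps Engineer"],
        "Accountable": ["Cloud Architect"],
        "Consulted":   ["Network Team"],
        "Informed":    ["Service Desk"],
    },
}

# Print the matrix task by task.
for task, roles in raci.items():
    print(task)
    for role, people in roles.items():
        print(f"  {role}: {', '.join(people)}")
```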

Coding Dojo and Code Kata

A “Dojo” is a gathering place for software engineers where they can work together to hone their programming skills. It is typically a venue for learning, experimentation, and collaboration.

A “Kata” is a specialised exercise or challenge designed to help programmers improve their skills through repetition and focused practice. Katas can take a variety of forms, but a Kata is typically a brief, self-contained problem or activity that can be finished in a short amount of time. Katas are frequently used to teach new techniques or to investigate new approaches to problem-solving.
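
For instance, here is a minimal sketch of one of the best-known katas, FizzBuzz, in Python. The value of the exercise lies in practising small, disciplined, test-driven steps rather than in the problem itself.

```python
# A classic kata: FizzBuzz. Return "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, "FizzBuzz" for both, else the number.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Tiny test suite, in the spirit of kata practice.
assert fizzbuzz(3) == "Fizz"
assert fizzbuzz(10) == "Buzz"
assert fizzbuzz(15) == "FizzBuzz"
assert fizzbuzz(7) == "7"
```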

Understanding the concepts - Reference Architecture - Landing Zone - Blueprint - Code Templates - Scaffolding

In this section, we go over some basic ideas that are important for putting this knowledge to use in the real world: “reference architecture”, “landing zone”, “blueprint”, “code templates”, and “scaffolding”, all of which are connected to one another in some way. The text highlights the goal of each concept and its role in putting together a team that produces solutions for an organisation. It is also essential to emphasise that the ideas discussed in this article are not the same as those discussed by Microsoft in their Azure Resources section. Starting with reference architecture, which acts as a technical guide on how to solve business problems with technology, the text moves on to landing zones, blueprints, code templates, and scaffolding, providing a detailed explanation of each concept along the way. This section acts as a guide for readers who are interested in applying their knowledge in a practical setting, and each notion plays a vital part in designing and implementing technological solutions.

A DevOps engineer story

Mary was working as the DevOps engineer for a growing company, and she was continually struggling to keep up with its expectations. Because the organisation’s software was built from a wide variety of distinct components and approaches, it was difficult to maintain and keep up to date. To make matters worse, the software was designed to run across a variety of platforms and technologies, which made it even more challenging to administer.

One day, Mary’s employer presented her with a new challenge: the firm was planning to move from its on-premises datacentre to a cloud provider, and they wanted her to ensure that the transfer went smoothly. This involved not only modernising the software so that it could operate on the new cloud platform, but also transforming the organisation’s tightly coupled service architecture into a more adaptable, micro-segmented design.

Utilization of Power BI

Microsoft Power BI is a business intelligence and data visualisation platform that enables users to build interactive reports and dashboards from a variety of data sources. The platform is available through the Microsoft cloud. The following is a list of some of the most important features of Power BI:

Data connectivity: Power BI allows users to connect to a broad variety of data sources, including databases, files, and cloud-based services. It also supports real-time data connections, which enable users to build dashboards that display up-to-date data.

Data modelling and transformation: Power BI includes tools for shaping and transforming data, with capabilities for filtering, aggregating, and grouping. It also enables users to develop custom formulae and calculations through support for calculated columns and measures.
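
Power BI’s own transformations are built with Power Query and DAX, but the same kinds of shaping operations can be sketched in Python with pandas to make the idea concrete; the sales data below is made up purely for illustration.

```python
import pandas as pd

# Hypothetical sales data standing in for a Power BI dataset.
sales = pd.DataFrame({
    "region": ["EU", "EU", "US", "US"],
    "product": ["A", "B", "A", "B"],
    "amount": [120.0, 80.0, 200.0, 50.0],
})

# Filtering, grouping, and aggregating, analogous to Power BI's
# data transformation tools.
eu_only = sales[sales["region"] == "EU"]
totals = sales.groupby("region")["amount"].sum()

# A derived column, analogous to a calculated column in Power BI.
sales["amount_with_tax"] = sales["amount"] * 1.077

print(totals)
```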

Utilization of Azure DevOps

Read the “Technology’s choices” article for more context

Technology’s choices | LinkedIn

Azure DevOps is a collection of tools and services for managing the entire software development lifecycle, from planning through development and testing to deployment. It offers capabilities for collaboration, code management, continuous integration and delivery, and software testing and distribution. The following is a list of some of the most important features offered by Azure DevOps:

Work item tracking: Azure DevOps provides tools for tracking development work, including capabilities for generating and tracking work items such as user stories, tasks, and defects.

Version control: Azure DevOps offers support for version control systems, including Git and Team Foundation Version Control (TFVC). It includes tools for code collaboration, such as the ability to examine code history, track changes, review code, and merge code.
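
As a small illustration of how these services can be scripted, here is a sketch that reads a single work item through the Azure DevOps REST API. The organisation, project, and work item id are placeholders, and a personal access token is assumed to be available in an environment variable.

```python
import os
import requests

# Placeholders: substitute your own organisation, project, and work item id.
ORG = "my-organisation"
PROJECT = "my-project"
WORK_ITEM_ID = 42
pat = os.environ["AZURE_DEVOPS_PAT"]  # personal access token

url = (f"https://dev.azure.com/{ORG}/{PROJECT}"
       f"/_apis/wit/workitems/{WORK_ITEM_ID}?api-version=7.0")

# Azure DevOps accepts a PAT via basic auth with a blank username.
resp = requests.get(url, auth=("", pat), timeout=30)
resp.raise_for_status()
fields = resp.json()["fields"]
print(fields["System.Title"], "-", fields["System.State"])
```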

Preference for Bicep over ARM

Read the “Technology’s choices” article for more context

Technology’s choices | LinkedIn

The Azure Resource Manager (ARM) service exposes an application programming interface (API) for controlling Azure resources. Using its REST API or Azure Resource Manager templates, it is possible to create, update, and remove Azure resources.

ARM templates are JSON files that define the infrastructure for an Azure solution. They may be used to deploy and manage resources such as storage accounts, virtual networks, and virtual machines. Using ARM templates to automate the process of creating and managing Azure resources is a form of infrastructure as code (IaC).
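
Whether the template is written as ARM JSON or as Bicep, a deployment can be driven from Python through the Azure CLI. In this sketch the resource group, template file, and parameter are hypothetical placeholders.

```python
import subprocess

# A minimal sketch of deploying a Bicep (or ARM JSON) template via the
# Azure CLI. Resource group, file name, and parameter are placeholders.
cmd = [
    "az", "deployment", "group", "create",
    "--resource-group", "rg-demo",       # hypothetical resource group
    "--template-file", "main.bicep",     # could equally be azuredeploy.json
    "--parameters", "environment=dev",   # hypothetical parameter
]

result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    raise RuntimeError(f"Deployment failed:\n{result.stderr}")
print("Deployment submitted successfully.")
```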

I prefer to use Ansible

Read the “Technology’s choices” article for more context

Technology’s choices | LinkedIn

Ansible is an open-source configuration management and automation tool, written in Python, that can help businesses automate their operations and infrastructure. Ansible was designed to be user-friendly, with a straightforward, declarative language. Target systems do not need any agents or software installed, which makes it simple to manage a wide variety of systems and environments.

Configuration management, application deployment, and cloud provisioning are just some of the many activities that can be automated with Ansible, which is used extensively by businesses of all sorts. It also has widespread support, including a sizeable and lively community of users and developers, as well as a diverse selection of plugins and modules that can be used to extend its capabilities.

Whether Ansible is the “ideal” technology for your firm depends on your specific requirements and needs. Nevertheless, there are a few reasons why Ansible might be a suitable match for your business. Ansible was developed with the goal of being easy to use and understand: it has a straightforward, declarative language and a minimalist style. This makes it an excellent option for businesses that wish to automate their infrastructure and processes without investing a significant amount of time or money in acquiring specialised knowledge.
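
Because Ansible is itself written in Python, playbooks can also be driven from Python code. One option is the ansible-runner library, sketched below under the assumption that /tmp/ansible-demo contains a project directory with a site.yml playbook and an inventory; both names are hypothetical.

```python
import ansible_runner  # pip install ansible-runner

# A minimal sketch of invoking an Ansible playbook from Python.
# private_data_dir and playbook name are hypothetical; ansible-runner
# expects the playbook under <private_data_dir>/project/.
result = ansible_runner.run(
    private_data_dir="/tmp/ansible-demo",
    playbook="site.yml",
)

print("Status:", result.status)  # e.g. "successful" or "failed"
print("Return code:", result.rc)
```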

Use of Python

Read the “Technology’s choices” article for more context

Technology’s choices | LinkedIn

Python is a robust, general-purpose programming language that has gained a great deal of popularity in the field of information technology. When applied within the context of Ansible and other command-line tools, it possesses a number of benefits that make it an ideal scripting language.
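
To illustrate the point, here is a minimal command-line tool in Python of the kind that often sits alongside Ansible in an automation toolchain; the flags and the environment name are hypothetical.

```python
import argparse
import sys

# A minimal sketch of a Python command-line tool, the kind of glue
# script often used alongside Ansible and other CLI tooling.
def main() -> int:
    parser = argparse.ArgumentParser(description="Greet a target environment.")
    parser.add_argument("--env", default="dev", help="target environment name")
    parser.add_argument("--verbose", action="store_true")
    args = parser.parse_args()

    if args.verbose:
        print(f"Running against environment: {args.env}")
    print(f"Hello, {args.env}!")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```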

Azure rather than AWS or GCP

Read the “Technology’s choices” article for more context

Technology’s choices | LinkedIn

Amazon Web Services (AWS) and Microsoft Azure are both cloud computing platforms that provide a broad variety of services to their customers, including computation, storage, and networking. Deciding between the two can be challenging because each option has both advantages and disadvantages. When comparing Azure with AWS, here are some key considerations to keep in mind:

Pricing: Both Azure and AWS offer a pay-as-you-go pricing plan; however, the actual charges may differ based on the services you use and the region you are in. Windows-based workloads tend to be cheaper to run on Azure, whereas Linux-based workloads tend to be cheaper on AWS. Because both platforms provide a number of price reductions and cost optimisation tactics, it is vital to thoroughly assess your alternatives in order to identify the most cost-effective solution.

YAML rather than JSON

Read the “Technology’s choices” article for more context

Technology’s choices | LinkedIn

YAML, which stands for “YAML Ain’t Markup Language”, is a human-readable data serialisation language that is widely used for configuration files, though it may also be used for data storage. It is comparable to JSON in that it is a method for representing data structures; however, it is simpler to work with, since it is more readable and contains fewer superfluous characters. One of the most significant differences between the two is that YAML uses indentation to denote structure, whereas JSON uses curly brackets. This improves YAML’s versatility and readability, but it also makes it more prone to formatting errors.
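
The difference is easiest to see side by side. The sketch below parses the same configuration from both formats with Python’s json module and the PyYAML library; the configuration itself is made up.

```python
import json
import yaml  # pip install pyyaml

# The same configuration in JSON and in YAML; both parse to the identical
# Python dictionary, but the YAML form has less syntactic noise.
as_json = '{"service": {"name": "api", "replicas": 3, "tags": ["web", "prod"]}}'

as_yaml = """
service:
  name: api
  replicas: 3
  tags:
    - web
    - prod
"""

assert json.loads(as_json) == yaml.safe_load(as_yaml)
print(yaml.safe_load(as_yaml))
```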

Technology’s choices

Technology choices refer to the decisions that businesses and organisations make when implementing various technology tools and systems. These decisions have a substantial bearing on a company’s day-to-day operations, its productivity, and its ability to compete in the market. In today’s fast-paced business climate, it is crucial to make educated technology choices that enable the firm to stay ahead of the curve and maintain its relevance.

Cost, functionality, scalability, security, and ease of use are just a few of the many aspects that organisations should take into consideration when selecting technology. For instance, a company that wants to deploy a new customer relationship management (CRM) system may need to think about the price of the software, the features and capabilities the system offers, and whether it can scale with the organisation’s growth. In addition, the company must ensure that the CRM system is both secure and simple for employees to use in order to guarantee its widespread adoption and continued success.

Hyper compute

The term “hyper compute” is most commonly used to refer to a high-performance computing system. Such a system typically makes use of innovative hardware and software technologies to handle massive and complicated datasets in a quick and effective manner. This method of computing is frequently utilised in fields such as engineering, scientific research, and other data-intensive endeavours that call for enormous amounts of processing power.

In most cases, hyper compute systems depend on specialised hardware such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), all of which are designed to perform complex calculations in parallel. In addition, these systems may use specialised software, such as parallel computing libraries and programming languages, to enhance overall performance.
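
The core idea those libraries exploit is splitting independent work across many processing units. Here is a minimal Python illustration using the standard multiprocessing module, with a made-up workload standing in for a real computation.

```python
from multiprocessing import Pool

# A minimal illustration of the parallelism hyper compute systems exploit:
# an embarrassingly parallel workload split across worker processes.
def heavy_calculation(x: int) -> int:
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    inputs = [10_000, 20_000, 30_000, 40_000]
    with Pool(processes=4) as pool:
        results = pool.map(heavy_calculation, inputs)
    print(results)
```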

Hyper compute systems can provide significant advantages over traditional computing systems, including the ability for researchers and data scientists to process massive amounts of data quickly and effectively, which can lead to faster insights and discoveries. Despite this, hyper compute systems are often rather expensive and require a high level of specialised skill to operate successfully.

The Microsoft Azure Cloud offers a wide variety of cloud-based services and solutions, one of which is a comprehensive selection of high-performance computing options (also known as hyper compute). Azure includes a range of services and tools that are designed to facilitate high-performance computing workloads. Some examples of these services and products include Azure Virtual Machines with Graphics Processing Units (GPUs), Azure Batch, and Azure CycleCloud.

Azure Virtual Machines with GPUs provide access to high-performance computing clusters with specialised graphics processing units. These clusters are ideal for tasks that require large amounts of computing power, such as complex scientific simulations, deep learning, and other data-intensive activities.

Azure Batch is a managed solution for high-performance computing applications that gives developers the ability to execute large-scale parallel and batch compute workloads in the cloud. It makes a range of tools and functionalities available for processing compute-intensive tasks at scale.

A cloud-based solution for high-performance computing (HPC) management, Azure CycleCloud streamlines the deployment, administration, and scaling of HPC workloads on Azure. It offers a platform that is scalable, safe, and cost-effective for executing high-performance computing applications, and it has support built-in for common HPC schedulers and applications.

The Microsoft Azure Cloud offers a wide variety of hyper computing solutions and services, which make it possible for users to conduct sophisticated and data-intensive tasks in an easy and effective manner. Both Hyper Compute and the architecture of personal computers (PCs) share some similarities while also exhibiting some key distinctions. A quick comparison is as follows:

Both Hyper Compute and PC architecture rely on CPUs to process data, and both use memory (RAM) to store data that the CPU can access quickly. Both also typically use hard disk drives (HDDs) or solid-state drives (SSDs) for storage, although Hyper Compute frequently relies on high-speed SSDs.

The main difference between PC architecture and hypercomputing is that hypercomputing typically makes use of more advanced hardware components, such as specialised processors like GPUs or FPGAs, to perform complex calculations in parallel, whereas PC architecture primarily relies on a central processing unit (CPU).

While personal computers typically have less random access memory (RAM), hyper computing frequently makes use of large-scale memory systems such as high-bandwidth memory (HBM) or non-volatile memory express (NVMe).

In contrast to the PC architecture, which is primarily geared towards general-purpose computing and personal usage, the Hyper Compute architecture is built for high performance, scalability, and fault tolerance.

Whereas PC architecture often makes use of more general-purpose software like operating systems and productivity apps, Hyper Compute may make use of specialised software and programming languages, such as parallel computing libraries and languages.

The Hyper Compute architecture was developed specifically for high-performance computing; it contains specialised hardware and software components that are meant to assist the execution of large-scale, sophisticated data processing operations. On the other hand, personal computer architecture was developed for use in general-purpose computing, with an emphasis on software built for personal use and productivity programmes.

A diverse selection of high-performance computing jobs are suitable for Hyper Compute. Here are some examples of frequent applications:

Scientific simulations: Hyper Compute can be used to execute complex scientific simulations, such as weather forecasting, computational fluid dynamics, or molecular dynamics simulations.

Machine learning and deep learning: Hyper Compute can be used for training and deploying machine learning and deep learning models, both of which require significant computational resources to analyse massive amounts of data.

Big data processing: Hyper Compute can be used for processing and analysing big datasets, such as those created by social media platforms, internet of things (IoT) devices, or scientific studies.

Modelling of financial systems: Hyper Compute can be used to run complex models and simulations of financial systems, such as Monte Carlo simulations or risk analysis.

Rendering and animation of videos: Hyper Compute can be used to render high-quality videos and animations, such as those used in the film and gaming industries.

Research in genomics: Hyper Compute can be used to analyse massive genomic datasets, such as those produced by gene sequencing technologies.

Cryptography: calculations involving cryptography, such as those required by blockchain technology, can be carried out with the assistance of Hyper Compute.

High-performance databases: Hyper Compute can be used to run high-performance databases, such as those used for real-time analytics or online transaction processing.

High-performance computing clusters: Hyper Compute may be used to construct and manage high-performance computing clusters, which are used for parallel processing and distributed computing jobs.

IoT edge computing: Hyper Compute can be used for processing data at the edge of the network, such as in IoT devices or sensor networks, where it is vital to have low-latency and high-performance computing.

The phrases “cloud computing platform” and “hyper compute” are related, although each refers to a different aspect of computing. The following is a list of some of the differences and similarities between these two concepts:

Differences: Scope: A cloud computing platform is a comprehensive suite of services that provides customers with resources such as computing power, storage, networking, and applications over the internet. These services are referred to collectively as “the cloud.” Hyper Compute, on the other hand, refers to a particular kind of computing architecture that was developed specifically for high-performance computing tasks, such as scientific simulations or the processing of large amounts of data.

Architecture: A cloud computing platform will often consist of a number of different components and layers, such as infrastructure, platform, and software. Hyper Compute, on the other hand, is a specialised design that concentrates on high-performance computing with a distributed, often parallel, processing approach, in contrast to the more centralised, general-purpose approach of cloud computing.

Cost: Cloud computing systems often offer a pay-as-you-go pricing model, which allows customers to pay only for the resources that they actually employ. On the other hand, due to the specialised hardware and software that is necessary for high-performance computing, Hyper Compute may have a higher price tag.

Similarities: Scalability: Both cloud computing platforms and Hyper Compute are designed to have a high degree of scalability, which enables customers to easily provision and scale their computing resources according to their specific requirements.

Accessibility: Cloud computing systems and Hyper Compute may both be accessed over the internet, making them available from any location with an active internet connection.

Automation: Both cloud computing platforms and Hyper Compute may be automated with the help of technologies such as Ansible or Terraform. This enables users to provision and manage computing resources more quickly through the use of code.

Cloud computing platforms and Hyper Compute do have some things in common, but there are significant differences between the two in terms of scope, architecture, and cost. Cloud computing platforms offer a comprehensive set of services that can be applied to a wide variety of use cases, whereas Hyper Compute is a specialised architecture developed specifically for high-performance computing applications.

Configurations, Feature Flags, Settings ...

How exactly should configurations, including feature flags, be properly maintained?

When it comes to maintaining configurations, including feature flags, there are numerous best practices. Use version control: it is essential to store configurations in a version control system, such as Git, so that changes can be monitored and rollbacks can be performed in the event that problems arise.

Use a central configuration management system: a centralised configuration management solution, such as a configuration management database, can help you manage configurations across many environments and applications. A minimal sketch of the feature-flag side of this follows below.
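
To make this concrete, here is a minimal feature-flag sketch in Python. In practice the flags would live in a version-controlled file; they are inlined here so the example runs as-is, and the flag names are hypothetical.

```python
import os

# A minimal feature-flag sketch. In practice FLAGS would be loaded from a
# version-controlled JSON/YAML file; it is inlined here for illustration.
FLAGS = {
    "new_checkout": True,
    "beta_reports": False,
}

def is_enabled(name: str) -> bool:
    # An environment variable such as FLAG_NEW_CHECKOUT=1 overrides the file,
    # which is handy for per-environment or emergency toggles.
    override = os.environ.get(f"FLAG_{name.upper()}")
    if override is not None:
        return override == "1"
    return FLAGS.get(name, False)

print("new_checkout:", is_enabled("new_checkout"))
print("beta_reports:", is_enabled("beta_reports"))
```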

Artificial Intelligence in Automation

Is it possible to employ natural language in order to make the DevOps machinery better?

Natural language processing (NLP) may, in fact, be utilised to make DevOps operations more efficient. One of the possible applications is ticket routing: NLP might be used to automatically categorise user requests or tickets and distribute them to the right group or people to be handled.
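
As a toy sketch of that ticket-routing idea, the Python example below trains a tiny text classifier with scikit-learn. The tickets and team labels are made up, so the point is the shape of the solution rather than its accuracy.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up training data: ticket texts and the team each was routed to.
tickets = [
    "build pipeline failed on merge",
    "cannot log in to the portal",
    "deployment to production stuck",
    "password reset not working",
]
teams = ["devops", "identity", "devops", "identity"]

# TF-IDF features feeding a naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(tickets, teams)

print(model.predict(["release job failed again"]))  # likely: ['devops']
```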

Pipeline

A manufacturing pipeline is a set of devices that have been put together in order to produce consistent results. This implies that one device is capable of connecting to or communicating with the next, meaning that the output of one is compatible with the input of another. The same fundamental concept lies behind the pipe command in Linux, which joins commands by making the output of one program the input of another. In a DevOps pipeline, each device is an action, or a combination of actions, that complies with a predetermined standard operating procedure (SOP).
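
The Linux analogy can be reproduced directly from Python. The sketch below wires up the equivalent of `ls | wc -l` using the standard subprocess module, connecting the output of one program to the input of the next.

```python
import subprocess

# The equivalent of "ls | wc -l": the stdout of ls becomes the stdin of wc.
ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
wc = subprocess.Popen(["wc", "-l"], stdin=ls.stdout, stdout=subprocess.PIPE)
ls.stdout.close()  # let ls receive SIGPIPE if wc exits early

output, _ = wc.communicate()
print("Entries in current directory:", output.decode().strip())
```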

Inner-Source

Inner Source is a software development methodology that adapts the ideas and methods of open-source software development for use within enterprises. It involves establishing open, cooperative development communities within an organisation and sharing source code and other forms of intellectual property among them.

The goals of Inner Source are to encourage internal cooperation and creativity and to speed up the production of high-quality software products. It is predicated on the theory that if different teams within an organisation share their source code and other internal resources, they can build on each other’s work and make better use of the organisation’s collective expertise to solve difficult problems more quickly and efficiently. Inner Source can also improve the openness and accountability of software development within an organisation and help establish a culture of continuous learning and growth.

The Inner Source methodology for software development is predicated on a number of core ideas, including the following:

Collaboration: The success of Inner Source is contingent upon open and collaborative communication between the many teams and individuals that make up a company.

Some old photos

My Collection

My records include digital photos going back to 2000, even though I have been doing photography for longer than that.

I just realised that I have been clicking a camera, making the shutter work, for over two decades. In any case, I always try to improve bit by bit, photo by photo.

The Counter

193k is the current number on my photo counter

Cloud genesis

The beginning of a cloud strategy

All you need to get started.

Identity

Depending on where you are coming from, your background will force you to think in a certain way.

If the only tool you know is a hammer, every problem will be handled like a nail.

Azure DevOps Factory

Azure DevOps Factory and cloud industrialization

There is no way you can make it without automation. Cloud industrialisation (a revolution) is based on controlled automation.

Keep In Touch

Feel free to contact us for any
project idea or collaboration

support@deixei.com

Zug, Switzerland