
7 posts tagged with "architecture"


· 12 min read

Application Programming Interfaces (APIs) come in various forms, with synchronous and asynchronous being the primary types. Synchronous protocols like Hypertext Transfer Protocol (HTTP) underpin RESTful implementations, while asynchronous protocols like Server-Sent Events (SSE) and MQ Telemetry Transport (MQTT) underpin EVENTful implementations.

EVENTful message design can be categorized into three main types: notifications, objects, and streams. These messages can represent past actions (events) or future actions (commands).

A Brief History of Asynchronous APIs

Asynchronous APIs have been pivotal in the development of interactive and real-time web applications. The journey began with the establishment of standards like MQTT in 1999, which emerged as a lightweight messaging protocol ideal for low-bandwidth, high-latency environments. MQTT's publish-subscribe model was a departure from the synchronous HTTP protocol introduced in 1991, offering a more efficient way to handle real-time, bidirectional communication.

The term "EVENTful" APIs aptly encapsulates the nature of asynchronous communication, with EVENT standing for Efficient, Versatile, Nonblocking, and Timely. These characteristics are inherent to asynchronous APIs, which include message-based and event-driven architectures, providing a robust foundation for services that require real-time updates and interactions.

The concept of APIs has a storied history, dating back to the early days of computing as referenced in the 1951 book "The Preparation of Programs for an Electronic Digital Computer." Over the past 70 years, APIs have evolved dramatically, especially with the advent of web-based APIs. A significant milestone was the introduction of the Ajax pattern by Jesse James Garrett in 2005. Ajax leveraged asynchronous JavaScript and XML to enable web pages to dynamically fetch and display content without a full page reload, enhancing user experience and web application performance.

Ajax laid the groundwork for asynchronous web requests, primarily interacting with RESTful APIs. However, as the web evolved, so did the need for more efficient real-time communication methods. This led to the adoption of Server-Sent Events (SSE), a modern approach that allows servers to push updates to clients over a single, long-held HTTP connection. Unlike MQTT, which is a protocol designed for machine-to-machine communication, SSE is specifically tailored for web applications, providing a standardized way to stream updates from the server to the client.
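
To make SSE concrete, here is a minimal sketch of a browser-side consumer written in TypeScript. The endpoint URL and event name are hypothetical, used purely for illustration:

// Minimal SSE consumer sketch (browser TypeScript).
// The endpoint URL and event name are assumptions for illustration.
const source = new EventSource("https://investment.arbs.io/trade-events");

source.addEventListener("trade-executed", (event) => {
  // Custom SSE events arrive as MessageEvent objects with a string payload.
  const trade = JSON.parse((event as MessageEvent).data);
  console.log("Trade executed:", trade);
});

source.onerror = () => {
  // The browser retries the connection automatically; log for visibility.
  console.warn("SSE connection interrupted, retrying...");
};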

RESTful Systems

In RESTful systems, the consumer initiates communication, making a request to which the service responds with the appropriate data. For example, a RESTful service might maintain a dataset of all completed trades for an investment firm. As each trade is executed and finalized, this dataset is updated accordingly. When a consumer needs information on completed trades, it makes a request to the service, which responds with the relevant data for all the trades completed that day.

To illustrate this with a practical example, imagine an investment firm that needs to keep its traders and consumers informed about the status of their stock market trades. The firm could utilize a RESTful API to manage this data flow. The service would have access to a completed-trades dataset, exposing an endpoint that consumers can call to obtain the list of trades completed on a given day. A consumer application might issue a GET request to https://investment.arbs.io/completed-trades to retrieve this information.

The service's HTTP response to such a request could be structured as follows:

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 1234

{
  "trades": [
    {
      "id": "1",
      "stock": "AAPL",
      "volume": "100",
      "price": "150.00",
      "status": "completed"
    },
    {
      "id": "2",
      "stock": "MSFT",
      "volume": "50",
      "price": "250.00",
      "status": "completed"
    }
    // more trades here...
  ]
}

This JSON response provides a clear and structured representation of the completed trades, allowing consumers to easily parse and use the data as needed.
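
For completeness, a consumer might retrieve this data with a simple polling loop. The TypeScript sketch below (against the hypothetical endpoint above) highlights the drawback we return to shortly: the full dataset is re-requested on every poll, whether or not anything has changed.

// Hypothetical polling consumer for the completed-trades endpoint.
async function pollCompletedTrades(): Promise<void> {
  const response = await fetch("https://investment.arbs.io/completed-trades");
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  const body: { trades: Array<{ id: string; stock: string }> } =
    await response.json();
  console.log(`Fetched ${body.trades.length} completed trades`);
}

// Re-request the entire dataset once a minute, changed or not.
setInterval(() => void pollCompletedTrades(), 60_000);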

EVENTful Systems

EVENTful systems stand in contrast to RESTful systems by adopting a model where services actively "push" messages to consumers. These messages are sent based on predefined filters or subscriptions made by the consumer applications. Unlike RESTful systems, which adhere to a strict request-response pattern, EVENTful systems blur the traditional roles of consumers and services, allowing consumers to act as both producers and consumers of messages. This model is beneficial in scenarios where real-time updates are crucial, and the constant polling of a service for updates would be inefficient.

For instance, an EVENTful system could notify consumers immediately when a trade is executed. Instead of consumers repeatedly querying the service for the latest trade statuses, the service could send a message to the relevant consumers as soon as a trade is completed. This ensures that consumers receive updates in real-time without the need to make redundant requests to the service.

Let's reimagine the previous example. We have a service that sends a message each time a stock trade is executed. The message could be sent to all interested parties, such as traders, investment managers, or consumers, as soon as the trade is confirmed.

The structure of the JSON payload for a trade-executed event might look similar to the following:

{
  "metadata": {
    "message_source": "trade-execution-system",
    "event-type": "trade-executed"
  },
  "message": {
    "trade_id": "3335dc20",
    "stock_symbol": "AAPL",
    "volume": "100",
    "price": "150.00",
    "trader_id": "butsona",
    "timestamp": "2023-03-13T12:00:00Z"
  }
}

In this payload, the metadata provides context about the source and type of the message, while the message body contains the details of the executed trade, such as the trade ID, stock symbol, volume, price, trader ID, and execution timestamp. Each message represents a single trade execution, and it's up to the receiving applications to maintain a record of all trades if they need a historical log.

By leveraging EVENTful systems, the investment firm can minimize latency and resource consumption, ensuring that all parties have the most current information as soon as it's available.
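
As a sketch of what a consumer of these messages might look like, the following TypeScript snippet subscribes to trade-executed messages using the mqtt.js client library. The broker URL and topic name are assumptions for illustration:

import mqtt from "mqtt"; // assumes the mqtt.js client library

// Hypothetical broker URL and topic name.
const client = mqtt.connect("mqtts://broker.arbs.io");

client.on("connect", () => {
  client.subscribe("trades/executed");
});

client.on("message", (_topic, payload) => {
  // Payload shape mirrors the trade-executed example above.
  const { metadata, message } = JSON.parse(payload.toString());
  console.log(
    `${metadata["event-type"]}: ${message.stock_symbol} @ ${message.price}`
  );
});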

Types of EVENTful Messages

EVENTful systems can utilize a variety of message types to communicate different kinds of information. While the terminology for these message types has yet to be standardized, understanding their concepts is crucial for designing an effective event-driven architecture. The three primary types of messages are notifications, event-carried state, and event-source messages.

Message-Notifications

Notifications are the simplest form of EVENTful messages, often containing minimal information about an event and a point where more detailed information can be found. In the context of an investment firm, a notification might be sent out to inform a trader or consumer about a significant event, such as a large trade execution or a market movement that triggers an alert. The notification could include basic details and a URL to access a more comprehensive report or to perform a follow-up action.

For example, a notification message in a trading environment might look like this:

{
  "metadata": {
    "event-type": "trade-alert",
    "message_source": "trading-platform"
  },
  "message": {
    "trade_id": "f18121e4",
    "timestamp": "2023-03-13T12:00:00Z",
    "trade_url": "https://investment.arbs.io/trade-notifications/f18121e4"
  }
}

A useful side effect of notifications is that they simplify security: the message itself carries little sensitive data, and the trade_url is protected, so only consumers with the correct permissions can request the full details.
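
A consumer receiving such a notification might follow up as sketched below in TypeScript. The bearer-token scheme is an assumption; the point is that the detailed data stays behind the protected trade_url:

// Sketch: following up on a notification by fetching the protected trade_url.
async function handleTradeAlert(notification: {
  message: { trade_id: string; trade_url: string };
}): Promise<void> {
  const response = await fetch(notification.message.trade_url, {
    // Hypothetical auth scheme; token sourced from the environment.
    headers: { Authorization: `Bearer ${process.env.TRADE_API_TOKEN}` },
  });
  if (response.status === 403) {
    console.warn("This consumer lacks permission for the full trade record.");
    return;
  }
  const fullTrade = await response.json();
  console.log("Full trade details:", fullTrade);
}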

Event-Carried State

Event-Carried State Transfer (ECS) messages are used when a significant block of data needs to be transmitted, such as a summary of a trade or a batch of trades. These messages carry all the necessary information to update a data store or to provide a comprehensive view of a trade without requiring additional queries to the system. This approach is beneficial for consolidating data from multiple sources and ensuring that the recipient has immediate access to the full context of the event.

An ECS message in a trading system might include details about the trade, the trader, and the current status, as follows:

{
  "metadata": {
    "event-type": "trade-summary",
    "message_source": "trade-execution-system",
    "timestamp": "2023-03-13T14:13:12Z"
  },
  "message": {
    "trade": {
      "trade_id": "f18121e4",
      "stock_symbol": "AAPL",
      "volume": "100",
      "price": "150.00",
      "trader_id": "butsona"
    },
    "status": {
      "execution_time": "2023-03-13T13:13:13Z",
      "trade_status": "completed"
    }
  }
}
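
Because the message carries full state, a consumer can maintain a local read model with a simple upsert and never query the trading system again. A minimal TypeScript sketch (type names are illustrative):

// Sketch: an ECS consumer keeps a local read model up to date.
type TradeSummary = {
  metadata: { "event-type": string; timestamp: string };
  message: {
    trade: {
      trade_id: string;
      stock_symbol: string;
      volume: string;
      price: string;
    };
    status: { trade_status: string };
  };
};

const localTrades = new Map<string, TradeSummary["message"]>();

function applyTradeSummary(event: TradeSummary): void {
  // The message carries full state, so we can simply upsert by trade_id.
  localTrades.set(event.message.trade.trade_id, event.message);
}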

Event-Source

Event-Source messages, or Delta messages, are designed to convey incremental changes or updates to data. They are useful for streaming real-time updates about the progress of a trade or a series of trades. These messages could inform consumers or internal systems about the progression of trade executions, price changes, or other relevant market events.

For example, a series of event-source messages might be sent to indicate the stages of trade execution:

[
  {
    "trade_id": "f18121e4",
    "stage": "initiated",
    "timestamp": "2023-03-13T10:11:12Z"
  },
  {
    "trade_id": "f18121e4",
    "stage": "executed",
    "timestamp": "2023-03-13T10:14:13Z"
  },
  {
    "trade_id": "f18121e4",
    "stage": "confirmed",
    "timestamp": "2023-03-13T10:14:15Z"
  },
  {
    "trade_id": "f18121e4",
    "stage": "settled",
    "timestamp": "2023-03-13T10:16:15Z"
  }
]

Each message in this array represents a discrete update to the trade's status, providing a granular view of the trade's lifecycle. By employing these different types of EVENTful messages, an investment firm can ensure efficient and timely communication within its trading ecosystem, facilitating quick decision-making and enhancing overall market responsiveness.
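
One way to consume such a stream is to fold the deltas into a current state. The following TypeScript sketch assumes that timestamps order the stages, which is an assumption about the stream rather than a guarantee of any particular broker:

// Sketch: folding a stream of delta messages into the current trade state.
type DeltaMessage = { trade_id: string; stage: string; timestamp: string };

function currentStage(
  deltas: DeltaMessage[],
  tradeId: string
): string | undefined {
  return deltas
    .filter((d) => d.trade_id === tradeId)
    // ISO-8601 timestamps sort lexicographically.
    .sort((a, b) => a.timestamp.localeCompare(b.timestamp))
    .at(-1)?.stage;
}

// With the example stream above: currentStage(messages, "f18121e4") === "settled"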

Events and Commands

In an EVENTful system within an investment firm, messages are a crucial component of the communication process. They are categorized based on purpose and timing: command messages are sent to trigger an action, while event messages are sent to notify that an action has occurred. Understanding and distinguishing between these two types of messages is fundamental when designing an EVENTful API system.

Identifying Events and Defining Commands

Defining Events

Identifying events is a critical step in the architecture of asynchronous APIs. It involves pinpointing all significant activities within the domain of stock market trading that should be tracked. Once identified, these activities are documented as events and incorporated into the API's implementation. Events in a trading environment might include a trader placing an order, a consumer logging into their account, or executing a trade.

In addition to these user-centric events, process-centric events occur within the trading system itself. These could involve the internal progression of a trade order through various stages, such as order validation, execution, and settlement. Events may also signal issues or exceptions, such as a trade rejection due to insufficient funds or a discrepancy in a trade order.

A corresponding message is designed to convey the necessary information for each identified event. The nature of the event will determine the type of message required—whether it's a simple notification, a more detailed object-style message, or a continuous stream of updates. For instance, a notification might be sent when a consumer logs in, while an object-style message might be used to convey the details of a new trade order, and streaming messages could provide real-time updates as a trade progresses through various stages.

Discoverability is essential, so catalogue these events, listing critical information such as the event name, message type, triggering conditions, example messages, and any additional notes. This documentation becomes a valuable resource for designers, developers, and architects involved in the system's development.

Here's an example of how events might be documented for an investment firm:

  • Trade Initiated: A trader orders a stock trade through the investment firm's platform.
  • Trade Executed: A trade order has been successfully executed on the stock exchange.
  • Trade Settled: The executed trade has been settled, and the securities and funds have been exchanged.

After identifying the significant past events, the next step is defining the commands representing future actions. Commands in a trading system include placing a trade order, modifying an existing order, or transferring funds. These commands are integral to the system's interactivity, allowing users to initiate actions that the trading system will process.

By meticulously identifying events and defining commands, an investment firm can ensure that its EVENTful system is comprehensive, responsive, and capable of handling the complexities of stock market trading. This approach enables the firm to maintain high service and efficiency, providing consumers with timely information and the ability to act swiftly in a dynamic market environment.

Defining Commands

In EVENTful systems, the distinction between events and commands is pivotal. Events indicate actions within the system, providing a historical record of transactions and activities. Conversely, commands instruct parts of the system to perform future actions, which may be executed immediately or after some delay, depending on the process involved.

For example, a command to execute a trade might be processed nearly instantaneously, reflecting a change in the status of a trade from pending to completed. However, certain commands, such as those involving complex payment processing or third-party bank verifications, may incur delays. In some cases, if there are issues with payment verification, the completion of a command might take significantly longer, ranging from minutes to hours.

The ability to define and initiate commands is essential for the functionality of an EVENTful system. Commands can encompass various actions, such as placing a new trade, modifying an existing trade, or cancelling a trade order. When designing such a system, it is crucial to meticulously describe all the commands necessary to support the desired activities within the trading environment.

The methodology for defining commands mirrors that for identifying events: a structured catalogue details each command's name, the type of message it will generate, an example of the message, and any supplementary notes. This documentation is a reference for the team members tasked with implementing the EVENTful system, ensuring clarity and consistency across the development process, and it also serves as a reference for consumers to understand how to interact with the solution. It is common to place commands behind RESTful APIs to improve security and discoverability (for example, by publishing an OpenAPI specification).

For instance, the command list might include actions such as:

  • Place Trade: A command that initiates a new trade order within the trading platform.
  • Modify Trade: A command that alters an existing trade order's parameters.
  • Cancel Trade: A command that enables the cancellation of a previously placed trade order.

Upon finalizing the list of commands, the foundational elements of an EVENTful system are established, encompassing messages, events, and commands.
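
As a hedged illustration of the "commands behind RESTful APIs" approach mentioned earlier, the following TypeScript sketch posts a Place Trade command to a hypothetical endpoint and treats the reply as an acknowledgement rather than a completed trade:

// Sketch: issuing a Place Trade command through a RESTful front door.
// The endpoint path and payload shape are hypothetical.
async function placeTrade(order: {
  stock_symbol: string;
  volume: string;
  price: string;
}): Promise<string> {
  const response = await fetch(
    "https://investment.arbs.io/commands/place-trade",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(order),
    }
  );
  // 202 Accepted signals the command was received and will be processed
  // asynchronously; a trade-executed event follows once it completes.
  if (response.status !== 202) {
    throw new Error(`Command rejected: ${response.status}`);
  }
  const { trade_id } = await response.json();
  return trade_id;
}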

Summary

We have explored the fundamental aspects of an EVENTful API system, focusing on transmitting messages between machines based on past events and future commands. The three message types (Message-Notifications, Event-Carried State, and Event-Source) have been discussed, along with guidance on their appropriate use. Additionally, the processes for identifying events and defining commands have been outlined, with the recommendation to document these elements for ease of collaboration and implementation. This structured approach is instrumental in building an efficient and effective EVENTful system, facilitating real-time communication and action in an increasingly fast-paced world.

· 9 min read

In the digital age, cybersecurity is a concern for everyone. From individuals to large corporations, understanding the threats we face is the first step towards protecting ourselves. In this blog post, we'll explore key points of cybersecurity, focusing on business logic attacks, online fraud, malware, and the evolution of firewall technology.

Business Logic Attacks: The Devil is in the Design

Business logic attacks are a unique breed of software vulnerabilities. Unlike common bugs that can be patched, these attacks exploit core design flaws in an application. These flaws could be anything from predictable user names to weak password policies.

For instance, if a website uses a predictable pattern for user identifiers, like firstname.lastname@company.com, an attacker can use this information to perform a dictionary attack on an account. Similarly, if a website's password recovery questions are easily researchable (like the name of your high school published on LinkedIn), an attacker can use this information to gain access to your account.

The best way to prevent these attacks is to address security in the design phase of software development. By incorporating security stories into the development process and engaging information security teams early on, developers can identify and address potential vulnerabilities before they become a problem.

Online Fraud: The Ever-Evolving Threat

Online fraud is not a new threat, but it's one that's constantly evolving. With over 90 billion e-commerce transactions made in 2016 alone, the potential for fraud is enormous.

Attackers are now using machine learning and artificial intelligence to adapt and communicate with victims automatically. They're also using social engineering techniques, like phishing and spearphishing, to trick users into giving up their sensitive information.

Malware: The Silent Threat

Malware is another major cybersecurity threat. From viruses and worms to ransomware, malware can cause significant damage to a system. One of the most concerning trends in malware is the ability to surgically change data to different values altogether.

Imagine if an attacker could change a stoplight at a major intersection from red to green on-demand or disable your car's brakes while you're driving down the freeway. With the rise of the Internet of Things (IoT), these scenarios are becoming increasingly possible.

Evolution of Firewall Technology

To combat these threats, firewall technology has evolved significantly over the years. From traditional Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) technology to Next-Generation Firewall (NGFW) technology, these systems are designed to protect our networks and systems from attacks.

However, these systems are not infallible. Attackers can use various techniques to evade detection, like packet fragmentation, encoding, and whitespace diversity.

This is where Web Application Firewall (WAF) technology comes in. WAFs are designed to protect HTTP applications by analyzing transactions and preventing malicious traffic from reaching the application. They can detect and address application layer attacks, like SQL injection and Cross-Site Scripting (XSS), and provide URL, parameter, cookie, and form protection for applications.

Web Application Firewalls (WAFs) are a crucial part of any cybersecurity strategy. They serve as the first line of defence for applications, detecting and mitigating a wide range of threats. However, they are not foolproof and should be deployed alongside other complementary technologies for a robust defence-in-depth strategy. Let's dive into the world of WAFs and understand their capabilities, how they work, and the emerging trends in this space.

Core WAF Capabilities

WAFs are designed to detect and mitigate threats by analyzing data structures rather than relying on exact dataset matches. This is achieved through the use of heuristics and rulesets. These rulesets can be configured to consider various information such as the country of origin, length of parts of the request, potentially malicious SQL code, and strings that appear in requests.
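
To illustrate the idea of heuristic rulesets (a toy model, not any vendor's actual engine), here is a TypeScript sketch that flags request parameters matching simple patterns:

// Toy illustration of heuristic, ruleset-driven inspection.
// Real WAF engines are far more sophisticated; these patterns are examples only.
type Rule = { name: string; pattern: RegExp };

const rules: Rule[] = [
  { name: "suspicious-sql", pattern: /\b(union\s+select|or\s+1=1)\b/i },
  { name: "oversized-parameter", pattern: /^.{2048,}$/s },
];

function inspect(requestParam: string): string[] {
  // Return the names of every rule the parameter trips.
  return rules.filter((r) => r.pattern.test(requestParam)).map((r) => r.name);
}

// inspect("id=1 OR 1=1") -> ["suspicious-sql"]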

WAFs and XSS Attacks

Cross-site scripting (XSS) attacks are a significant risk to businesses and consumers. Developers can prevent these attacks by validating user input and using output encoding. However, even with these best practices, vulnerabilities can still exist due to third-party libraries or software development processes that you don't control.
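
As a sketch of the output-encoding defence mentioned above, assuming hand-rolled code purely for illustration (production code should prefer a vetted library):

// Minimal HTML output encoding: neutralize characters that carry
// markup meaning before interpolating untrusted input into a page.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// escapeHtml('<script>alert("xss")</script>')
// -> "&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;"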

An attacker first needs to find an XSS vulnerability before they can exploit it. They can use tools like web application vulnerability scanners and fuzzers to find these vulnerabilities automatically. Once a vulnerability is found, the attacker can inject malicious scripts into the web application.

WAFs and Session Attacks

Session tampering is a significant threat that can allow attackers to manipulate session data and potentially gain unauthorized access to a system. WAFs can help mitigate these attacks by digitally signing artefacts such as cookies and ensuring users are communicating only with servers that have valid digital certificates.
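
The cookie-signing idea can be sketched with an HMAC, as below (Node.js TypeScript; the secret handling and cookie format are illustrative, not a specific WAF vendor's implementation):

import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative signing key; a real deployment would manage this securely.
const SECRET = process.env.COOKIE_SIGNING_KEY ?? "dev-only-secret";

function signCookie(value: string): string {
  const mac = createHmac("sha256", SECRET).update(value).digest("hex");
  return `${value}.${mac}`;
}

function verifyCookie(signed: string): boolean {
  const dotIndex = signed.lastIndexOf(".");
  if (dotIndex < 0) return false;
  const value = signed.slice(0, dotIndex);
  const mac = Buffer.from(signed.slice(dotIndex + 1), "hex");
  const expected = createHmac("sha256", SECRET).update(value).digest();
  // Constant-time comparison prevents timing side channels.
  return mac.length === expected.length && timingSafeEqual(mac, expected);
}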

Minimizing WAF Performance Impact

WAFs are deployed inline, meaning they are directly in the line of traffic. Therefore, it's crucial to ensure that they are engineered, designed, and deployed properly to avoid introducing incremental latency. Modern WAFs should be equipped to match or outpace the speeds of the Layer 2-3 devices that feed them.

WAF High-Availability Architecture

High availability (HA) is a critical aspect of any WAF solution. It's important that the components within the appliance are fault-tolerant from the outset. After addressing HA within the device itself, HA across devices is required. WAF deployments should support multiple horizontally scaled devices to provide HA and allow for sufficient horizontal scaling to accommodate any required network throughput.

Emergent WAF Capabilities

As technologies advance, attackers continue to take advantage of new capabilities to advance their agendas. WAF vendors are starting to add integrations with adjacent solutions and incorporate WAF technology into existing technology trends such as DevOps, Security Information and Event Management (SIEM), containerization, cloud, and artificial intelligence.

WAFs Authentication Capabilities

WAF solutions allow you to implement strong two-factor authentication on any website or application without integration, coding, or software changes. This can help protect administrative access, secure remote access to corporate web applications, and restrict access to a particular web page.

Detecting and Addressing WAF/IDS Evasion Techniques

When evaluating WAF technologies, it's important to test for core attack vector coverage and how well the solution addresses WAF evasion techniques. Some examples of WAF evasion techniques include multiparameter vectors, Unicode encoding, invalid characters, SQL comments, redundant whitespace, and various encoding techniques for XSS and Directory Traversal.

Virtual Patching

Virtual patching is a quick development and short-term implementation of a security policy intended to prevent an exploit from being successfully executed against a vulnerable target. It can help protect applications without modifying an application's actual source code. Virtual patches need to be installed only on the WAFs, not on every vulnerable device.

WAFs are a crucial part of any cybersecurity strategy. They offer a robust defence against a wide range of threats and are continually evolving to keep up with emerging trends and technologies. However, they are not foolproof and should be deployed alongside other complementary technologies for a robust defence-in-depth strategy.

WAF Components

WAFs are an essential part of any cybersecurity strategy. But they work with others. They're part of a team that includes other technologies like API Gateways, Bot Management and Mitigation systems, Runtime Application Self-Protection (RASP), Content Delivery Networks (CDNs), Data Loss Prevention (DLP) solutions, and Data Masking and Redaction tools. Each of these technologies plays a unique role in securing your applications and data.

API Gateways

API Gateways are like the bouncers of your application. They control who gets in and who doesn't. They protect your internal APIs and allow them to be securely published to external consumers. They can also do protocol translation, meaning they can receive a REST request from the internet and translate that into a SOAP request for internal services.

Bot Management and Mitigation

Bots are like the little minions of the internet. Some are good, like search engine bots that index web pages. But some are bad, like bots that generate mass login attempts to verify the validity of stolen username and password pairs. WAFs can help deal with certain types of bots, but for more advanced bot threats, you might need a specialized bot mitigation and defence device.

Runtime Application Self-Protection (RASP)

RASP is like a bodyguard that's always with your application. It's embedded into an application's runtime and can respond to runtime attacks by replacing tampered code with original code, safely exiting or terminating an app after a runtime attack has been identified, or sending alerts to monitoring systems.

Content Delivery Networks (CDNs) and DDoS Attacks

CDNs are like the delivery trucks of the internet. They distribute cached content and access controls closer to the users that consume them. They can also help protect against DDoS attacks by absorbing the attacks and minimizing the performance impact on the actual web servers.

Data Loss Prevention (DLP)

DLP solutions are like the security cameras of your data. They ensure that sensitive data doesn't leak out of corporate boundaries. Modern DLP solutions expand beyond the perimeter and integrate with cloud providers and directly with user devices.

Data Masking and Redaction

Data Masking and Redaction tools are like the blurring effect on a video. They conceal data or redact it so that only those who have a need to know can see the full dataset.

WAF Deployment Models

WAFs can be deployed in various ways, including on-premises, native cloud, cloud-virtual, inline reverse proxy, transparent proxy/network bridge, out-of-band, multitenancy, single tenancy, software appliance-based, and hybrid. The choice of deployment model depends on your specific needs and environment.

Designing a Comprehensive Network Security Solution

When designing a comprehensive network security solution, it's important to consider all the components and how they work together. This includes WAFs, API Gateways, Bot Management and Mitigation systems, RASP, CDNs, DLP solutions, and Data Masking and Redaction tools. Each of these technologies plays a unique role in securing your applications and data.

Summary

Web Application Firewalls are an essential part of any cybersecurity strategy. But they work with others. They're part of a team that includes other technologies that together provide a robust defence-in-depth strategy. So, when you're planning your cybersecurity strategy, make sure to consider all these components and how they can work together to secure your applications and data.

· 7 min read

Web Application Firewalls (WAFs) offer some of the most advanced firewall capabilities in the industry. They've evolved from traditional firewalls focusing on network layer traffic to sophisticated systems that can understand and track session state and make sense of what's happening at the application layer.

The Need for WAFs

As cyber-attacks become more advanced, climbing up the ladder of the Open Systems Interconnection model, there's a growing need for a different kind of inspection. This inspection should not only understand and make sense of network traffic but also be able to parse the "good" from the "bad" traffic. This is where WAFs come in.

WAFs can protect your systems through several means. One such method is signature-based detection, where a known attack signature has been documented, and the WAF parses the traffic, looking for a pattern match. Another method involves the application of behaviour analysis and profiling. Advanced WAFs can conduct a behavioural baseline to construct a profile and look for deviations relative to that profile.

The Changing Landscape of Cyber Attacks

In the past, attacks on applications and infrastructure were carried out by individual hackers manually. However, to become more efficient and drive more results, malicious operators and organizations have largely automated and industrialized attacks through the use of distributed botnets.

The Evolution of Application Development

Applications and their development have undergone significant changes with the advent of cloud deployments, container technologies, and microservices. Developers often reuse other people's code to achieve outcomes and functionality for their applications. This has led to an increase in the use of third-party libraries during the application development process.

Attackers are aware of this and are looking to exploit vulnerabilities found in commonly used third-party libraries such as OpenSSL. This means that the impact of a single well-known vulnerability multiplies the more widely the affected library is used in the development process. WAFs and adjacent technologies can help provide gap protection in the form of signature-based and behaviour-based identification and blocking. This can help address not only known vulnerabilities and threats but also zero-day threats and vulnerabilities.

Understanding WAF Functionality

A good starting point is the Open Web Application Security Project (OWASP) Top 10, which outlines the most prevalent vulnerabilities found in applications and walks through the means of mitigation by way of compensating controls.

Adjacent WAF technologies and functionality include:

  • API gateways
  • Bot management and mitigation
  • Runtime Application Self-Protection (RASP)
  • Distributed Denial of Service (DDoS) protection
  • Content Delivery Networks (CDNs)
  • Data Loss Prevention (DLP)
  • Data Masking and Redaction
  • Security Information and Event Management (SIEMs)
  • Security orchestration and incident response automation

By understanding the latest developments in WAF technology, you can better incorporate and integrate it with your existing and planned technology deployments, including cloud, on-premises, and hybrid topologies.

The Rise of Botnets

First, let's talk about botnets. These are networks of compromised computers controlled by hackers. Initially, botnets were used mainly for Distributed Denial of Service (DDoS) attacks. However, hackers have now industrialized botnets to automate attacks for different purposes. They can grow the size of the botnet, execute DDoS attacks, or even carry out surgical strikes against websites and applications.

What's more, hackers have started offering botnets-as-a-service on the dark web. This means that attackers can rent botnets to execute their own campaigns. It's a structured, albeit illegitimate, business model that's making cyber attacks more efficient and widespread.

The Complexity of Code and the Use of Third-Party Libraries

The past decade has seen an explosion of open-source code. This has given developers a plethora of choices about which libraries to use to minimize development effort. However, this has also opened up new avenues for attackers.

Attackers are constantly looking for vulnerabilities in commonly used libraries like OpenSSL. A vulnerability in such a common core security library can have serious security implications. Remember the Heartbleed Bug? It was a serious vulnerability in the OpenSSL cryptographic software library that allowed for the theft of information protected by SSL and Transport Layer Security (TLS) protocols.

The Advent of Microservices

Another trend in the development world is the use of microservices. These are small, discrete services that allow development teams to deploy new functionality iteratively and in quick, small sprints. However, each microservice potentially represents its own unique attack surface that can be exploited.

Developers often incorporate third-party libraries in these microservices as needed. This can introduce more individual attack surfaces and vulnerable third-party libraries, exposing your organization to additional risk.

The Challenge of Secure Application Development

Application development is like the Wild West. Developers have full freedom to pull third-party libraries from anywhere on the web. But what if they're using versions of these libraries that have been modified with backdoors or other malicious code? Or what if they're using older versions with known vulnerabilities?

The good news is that with the advent of DevOps, the ability to lock down source libraries through programmatically managed pipelines and build processes has greatly increased. However, many development teams are still in the early phases of adopting mature DevOps deployments. In the meantime, this needs to be balanced with compensating controls like regular vulnerability scanning or virtual patching and attack detection by using Web Application Firewalls (WAFs).

The Threat of Compromised Credentials

It's estimated that 50% of cyberattacks involve compromised credentials. The system of using usernames and passwords to gain access to websites is fundamentally broken, but it continues to perpetuate. For attackers, using compromised credentials is the simplest way in the front door. They want to expend the least amount of effort.

Compromised Accounts: The Dark Side of the Web

When we talk about compromised accounts, we're usually referring to end-user accounts. These are the accounts that everyday users like you and me have with various online services. When a major service like Yahoo! gets hacked, the stolen credentials can be used in what's known as credential stuffing attacks.

In these attacks, bots are configured to replace the variables of username and password with the compromised data. These bots can then attempt to gain access to other services using these stolen credentials. The scary part? These repositories of hacked usernames and passwords can be found on the dark web and sold to anyone willing to pay in Bitcoin. And they're not just sold once - they can be resold over and over again.

Sensitive and Privileged Accounts: A Hacker's Goldmine

Another type of account that can be compromised is a sensitive or privileged account. These are accounts that have administrative privileges over operating systems, databases, and network devices. If a hacker can gain access to these accounts, they can gain full control of a system or network.

A hacker might do this by escalating their privileges. For example, if a hacker gains access to a non-privileged account, they can then attempt to escalate their privileges by exploiting vulnerabilities in the system. This could involve identifying vulnerable software versions, researching known exploits, and then using these exploits to gain higher-level access.

Types of Attacks: Understanding the Threat Landscape

Now that we've covered the types of accounts that can be compromised, let's move on to the types of attacks that can occur. For this, we'll use the Open Web Application Security Project's (OWASP) Top 10 list, which is the industry standard for categorizing application-level vulnerabilities and attacks.

The OWASP Top 10 includes:

  1. Injection
  2. Broken Authentication
  3. Sensitive Data Exposure
  4. XML External Entities (XXE)
  5. Broken Access Control
  6. Security Misconfiguration
  7. Cross-Site Scripting (XSS)
  8. Insecure Deserialization
  9. Using Components with Known Vulnerabilities
  10. Insufficient Logging and Monitoring

Each of these attacks represents a different way that a hacker can exploit vulnerabilities in an application or system. By understanding these attacks, we can better protect ourselves and our systems.

Summary

In the world of cybersecurity, knowledge is power. By understanding the types of accounts that can be compromised and the types of attacks that can occur, we can better protect ourselves and our systems. Remember, the first step towards protection is understanding the threats we face. Stay safe out there!

· 3 min read

A New Approach to Software Integration

Flow is a concept in networked software integration that is event-driven, loosely coupled, highly adaptable and extensible. It is defined by standard interfaces and protocols that enable integration with minimal conflict and toil. Although there isn't a universally agreed-upon standard for flow today, it's poised to drive significant changes in how businesses and other institutions integrate.

Key Properties of Flow

Flow is the movement of information between disparate software applications and services. It is characterised by the following:

  • Consumers (or their agents) request streams from producers through self-service interfaces.
  • Producers (or their agents) choose which requests to accept or reject.
  • Once a connection is established, consumers do not need to request information actively—it is automatically pushed to them as it is available.
  • Producers (or their agents) maintain control of the transmission of relevant information—i.e., what information to transmit, when, and to whom.
  • Information is transmitted and received over standard network protocols—including to-be-determined protocols precisely aligned with flow mechanics.

Flow and Integration

Flow and event-driven architectures are exciting as they are crucial in our economic system's evolution. We are quickly digitising and automating the exchanges of value—information, money, and so on—that constitute our economy. However, most integrations we execute across organisational boundaries today are not in real-time, and they require mostly proprietary formats and protocols to complete.

The World Wide Flow (WWF)

The global activity graph is the World Wide Flow (WWF). The WWF promises to democratise the distribution of activity data and create a platform on which new products or services can be discovered through trial and error at low cost. The WWF promises to enable automation at scales ranging from individuals to global corporations or even global geopolitical systems.

Flow and Event-Driven Architecture

Event-driven architecture (EDA) is the set of software architecture patterns in which systems utilise events to complete tasks. Like APIs, EDA is a loosely coupled method of acquiring data where and when it is needed. However, its passive nature eliminates many time- and resource-consuming aspects of receiving "real-time" data via APIs. EDA provides a more composable and evolutionary approach to building event and data streams.

The Ancestors of Flow

There are plenty of examples of real-time data passed between organisations today, but most don't flow, as defined here. Generally, existing streaming interfaces are built around proprietary interfaces. The producer typically designs the APIs and other interfaces for its own purposes, or they are built for a specific use case, such as industrial systems.

Code and Flow

If we are looking at the basis of flow, we must look at another critical trend setting the stage: "serverless" programming, which depends on the system's flow of events and data. The increased adoption of managed queuing technologies such as Amazon Managed Streaming for Apache Kafka (Amazon MSK) or Google Cloud Pub/Sub, combined with the rapid growth of functions as a service (FaaS) code packaging and execution, is a valid signal that flow is already in its infancy.

In conclusion, flow is a promising concept that could revolutionise integrating software and systems. Flow could unlock a new level of automation and efficiency in our digital economy by enabling real-time, event-driven communication between disparate systems.

· 3 min read

OpenAPI is a specification for designing and describing RESTful APIs. OpenAPI extensions, also known as specification extensions, are a way to add additional information to an API definition. Extensions allow adding vendor-specific or custom fields to a specification document. They are defined as fields whose keys start with "x-", for example, "x-vendor-field". The contents and meaning of these extensions are specific to the vendor or tool using them and are not part of the OpenAPI specification.
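
For example, a fragment of an OpenAPI document in JSON form might carry a custom field alongside the standard ones. The field name and contents below are purely illustrative:

{
  "openapi": "3.0.3",
  "info": {
    "title": "Example API",
    "version": "1.0.0",
    "x-vendor-field": {
      "owner": "platform-team",
      "review-date": "2023-03-13"
    }
  },
  "paths": {}
}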

OpenAPI extensions can help designers in several ways:

  • Adding custom fields: Extensions allow designers to add custom fields to the OpenAPI specification, which can provide additional information about the API and enhance the design.

  • Enhancing tool support: By using extensions, designers can add functionality specific to their tools or workflows and improve the tool support for their API design.

  • Improving collaboration: Extensions can be used to share additional information between different teams and stakeholders involved in the API design process, enhancing collaboration and communication.

  • Supporting vendor-specific features: Extensions can support vendor-specific features, such as specific security protocols or data formats, that the core OpenAPI specification may not support.

  • Streamlining development: By using extensions, designers can simplify the development process and ensure that all necessary information is included in the specification, reducing the risk of miscommunication or misunderstandings.

x-badges

The "x-badges" extension in OpenAPI specifications allows designers to display badges, or small graphical elements, in the API documentation. These badges can be used to provide additional information about the API or to highlight specific features.

Here are some of the ways that "x-badges" can help with OpenAPI specifications:

  • Showing API status: Badges can be used to indicate the status of an API, such as "beta" or "deprecated." This information helps developers understand the current state of the API and whether it is appropriate to use.

  • Highlighting important information: Badges can highlight important information about the API, such as the version number, release date, or supported platforms. This information can be displayed prominently in the API documentation, making it easier for developers to find.

  • Providing visual cues: Badges can give visual cues that draw attention to specific information in the API documentation. This makes it easier for developers to find the information they need quickly.

Overall, the "x-badges" extension in OpenAPI specifications provides a simple and effective way to display additional information about the API. By using badges, designers can improve the usability and clarity of their API documentation.
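
The exact shape of the extension depends on the documentation tool consuming it. As an illustration only, a badge list might be attached to an operation like this (the field names are assumptions, not part of any standard):

{
  "get": {
    "summary": "List trades",
    "x-badges": [
      { "name": "Beta", "color": "orange" },
      { "name": "v2.1", "color": "blue" }
    ]
  }
}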


x-code-sample

Providing sample code: The "x-code-sample" extension can be used to include sample code snippets for different programming languages. This can help developers understand how to use the API and make it easier for them to get started.
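
A hedged example of what such an extension might look like in a specification document (the lang/source field names are modelled on common tooling, not a standard):

{
  "x-code-sample": [
    {
      "lang": "TypeScript",
      "source": "const res = await fetch('https://api.example.com/trades');"
    }
  ]
}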

x-client-id

Defining authentication information: The "x-client-id" and "x-client-secret" extensions can be used to define the client ID and secret required for authentication with the API. This can help ensure that developers have the information they need to properly use the API.

x-pkce-only

Enforcing security measures: The "x-pkce-only" extension can be used to enforce the use of Proof Key for Code Exchange (PKCE) in OAuth 2.0. This is a security measure that helps prevent unauthorized access to an API.

Summary

In summary, the OpenAPI extension allows designers to provide additional information and constraints to the API definition, making it easier for developers to understand and use the API. By using extensions, designers can improve the usability and security of their APIs.