
The Burgeoning Market for WebAssembly Observability: Powering the Future of IoT and Serverless

WebAssembly is rapidly transforming the landscape of both IoT and serverless computing. Its ability to deliver near-native performance, enhanced security, and cross-platform compatibility is driving its adoption across diverse industries. As these Wasm deployments scale, the need for robust observability solutions becomes paramount.

Why WebAssembly Will Succeed

Wasm’s growing success in IoT and serverless can be attributed to several key factors:

  • Performance: Wasm provides near-native execution speed. This is crucial for performance-sensitive applications in IoT and serverless environments where efficiency directly impacts responsiveness and cost.
  • Security: Wasm operates in a sandboxed environment, offering a strong security boundary. This isolation is particularly important for IoT devices, often deployed in potentially vulnerable locations, and for multi-tenant serverless platforms.
  • Portability: Wasm’s “write once, run anywhere” promise is a significant advantage. Developers can build applications that run consistently across diverse IoT devices and various serverless FaaS (Function as a Service) platforms without extensive modifications.
  • Efficiency: Wasm’s compact binary format and efficient execution model lead to lower resource consumption (CPU, memory). This is critical for resource-constrained IoT devices and contributes to cost-effectiveness in pay-per-use serverless models.

The Critical Need for WASM Observability

As Wasm-based applications become more complex and distributed, particularly in IoT and serverless architectures, robust observability is no longer a luxury but a necessity. It is essential for:

  • Performance Monitoring: Continuously tracking and analyzing the execution speed and resource usage of Wasm modules to ensure they deliver their promised performance benefits in real-world conditions.
  • Troubleshooting: Quickly identifying the root cause of issues, bugs, or performance bottlenecks within distributed Wasm systems and across the interactions between Wasm modules and their host environments.
  • Security Monitoring: Gaining visibility into the behavior of Wasm instances to detect anomalous activities or potential security threats within the sandbox.
  • Resource Optimization: Understanding how Wasm functions consume resources in serverless environments to optimize costs and ensure efficient scaling. For IoT, it’s vital for managing limited device resources effectively.

Market Size and Projections

While specific, dedicated market size projections for Wasm observability are still an emerging area of research, the growth trends in related markets clearly indicate a significant future opportunity:

  • IoT Device Management Market: This market is projected to grow substantially, with some reports indicating a potential size of USD 40.15 billion by 2032 with a CAGR of over 30%. The increasing number and diversity of connected devices necessitate robust management and monitoring, creating a direct need for Wasm observability as Wasm becomes more prevalent on these devices.
  • Serverless Computing Market: The serverless market is also experiencing rapid expansion, with projections suggesting it could reach USD 50-60 billion by 2030, with strong CAGRs. As more applications adopt serverless architectures, the demand for visibility into these ephemeral and distributed functions will surge, driving the need for Wasm-native observability.
  • Observability Tools and Platforms Market: The broader market for observability tools across all technologies is already significant, valued at several billion dollars and projected for continued growth at a healthy CAGR. This indicates a widespread recognition of the value of deep system visibility, a trend that will naturally encompass Wasm deployments.

The absence of specific “Wasm observability market size” figures doesn’t diminish the potential; rather, it suggests it’s an integral, growing component within these larger, rapidly expanding markets for IoT, serverless computing, and general observability.

The Future of Wasm Observability

The future of Wasm observability is bright and will likely involve:

  • Specialized Wasm Runtimes with Built-in Observability: Runtimes designed with observability features from the ground up, providing detailed metrics, tracing, and logging of Wasm execution (a minimal host-side sketch follows this list).
  • Integration with Cloud-Native Ecosystems: Seamless integration of Wasm observability data into existing cloud-native monitoring and analysis platforms.
  • Enhanced Edge Observability: Tools and techniques specifically designed for collecting and processing observability data from Wasm running on resource-constrained edge and IoT devices.
  • Standardization: Development of standards for Wasm observability to ensure interoperability between different runtimes and tools.
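
To ground what this could look like in practice, here is a minimal host-side sketch using the wasmtime crate in Rust; the inline module, the exported function name, and the metric format are illustrative assumptions, and the example assumes wasmtime and anyhow as dependencies:

use std::time::Instant;
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    // A tiny inline module with one exported function, used only for illustration.
    let module = Module::new(
        &engine,
        r#"(module (func (export "work") (result i32) i32.const 42))"#,
    )?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let work = instance.get_typed_func::<(), i32>(&mut store, "work")?;

    // Host-side observability: time the guest call and emit a simple metric line
    // that a collector could scrape or forward.
    let start = Instant::now();
    let value = work.call(&mut store, ())?;
    println!(
        "wasm_call_duration_us={} result={}",
        start.elapsed().as_micros(),
        value
    );
    Ok(())
}

Purpose-built runtimes and standards would push this kind of instrumentation below the host code, so that metrics, traces, and logs come out of the runtime itself rather than being bolted on around each call.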

As Wasm continues its trajectory towards becoming a ubiquitous runtime for distributed systems, the market for solutions that provide deep visibility into its execution will grow in tandem, becoming an essential part of the modern software development and deployment landscape.


Linux Kernel 6.14 and 6.15: Powering the Future of eBPF, WASM, and Serverless

The Linux kernel continues to evolve at a rapid pace, and the 6.14 and 6.15 releases bring a host of new features that are particularly relevant to cutting-edge technologies like eBPF, WebAssembly (Wasm), and serverless computing. Let’s delve into the key highlights from these releases.

Linux 6.14: Strengthening the Foundations

  • Improved Security: Kernel hardening and various security enhancements provide a more robust base for all applications, including those leveraging eBPF.
  • Enhanced Hardware Support: New drivers and architecture updates ensure compatibility with the latest hardware, which is crucial for serverless deployments.
  • General Performance Improvements: Optimizations across the kernel contribute to better performance for all workloads.

Although the 6.14 release doesn’t call out features aimed squarely at eBPF, Wasm, or serverless, any improvements to core kernel functionality are generally advantageous for these technologies.

Linux 6.15: A Leap Forward

Linux 6.15 builds upon the foundation of 6.14, offering more concrete advancements in areas that directly impact eBPF, Wasm, and serverless:

  • Expanded File Name Support: A significant change in 6.15 is the increased filename length limit for user-space filesystems (FUSE), jumping from 1024 to 4096 bytes. This enhancement, while seemingly minor, can be beneficial in complex serverless environments where dealing with uniquely named resources or containers is common.
  • Improved Storage and Filesystem Capabilities: EXT4 file system hardening and various enhancements to F2FS contribute to improved reliability and performance. Furthermore, Btrfs sees improvements with Zstandard compression.
  • Hardware Advancements: Support for newer AMD and Intel processors translates directly to improved performance for serverless functions, especially those running on bare metal or in virtualized environments.
  • Networking Improvements: While the details merit closer investigation, any improvements to networking within the kernel directly enhance the performance and reliability of serverless functions and containerized applications, a key component of Wasm and eBPF deployment scenarios.
  • Security Updates: Linux 6.15 incorporates vital security measures, including broader mitigation strategies against different types of attacks. This enhanced overall security is vital for eBPF and Wasm applications, as well as for serverless environments.

Implications for eBPF Development

eBPF thrives on a stable and feature-rich kernel. The performance and security improvements in both 6.14 and 6.15 make them excellent choices for eBPF development and deployment. Specific features that are likely beneficial include the stability updates and enhanced hardware compatibility.

Benefits for WASM

WebAssembly runtimes, particularly those running outside of the browser (e.g., in serverless functions or on the edge), benefit from a robust kernel. The general performance increases and security enhancements in these kernel releases are beneficial.

Impact on Serverless Computing

Serverless platforms rely heavily on kernel features for resource management, isolation, and networking. The improvements in 6.14, and especially the networking, storage, and larger-filename enhancements in 6.15, can translate to more efficient and scalable serverless deployments, since complex serverless environments routinely deal with uniquely named containers and resources.

Conclusion

Linux 6.14 and 6.15 provide a solid foundation for the continued growth of eBPF, Wasm, and serverless computing. While some features offer indirect benefits by improving overall kernel stability and performance, other changes, especially those related to storage, networking, and the expanded filename length, directly address the needs of these demanding technologies. As these technologies continue to mature, the ongoing evolution of the Linux kernel will be critical to their success.


Python and WebAssembly: A Powerful Combination

WebAssembly is a binary instruction format that enables near-native performance in web browsers. While traditionally languages like C, C++, and Rust have been compiled to WebAssembly, recent advancements have made it possible to run Python code within a WebAssembly environment. This blog post explores how to use Python with WebAssembly, showcasing code examples and real-world applications.

Why Use Python with WebAssembly?

  • Performance: Run computationally intensive Python code at near-native speeds in the browser.
  • Cross-Platform Compatibility: Execute Python code seamlessly across different operating systems and devices.
  • Security: WebAssembly runs in a sandboxed environment, enhancing the security of web applications.
  • Leverage Existing Python Libraries: Utilize popular Python libraries for data manipulation, machine learning, and more, directly in the browser.

Tools and Technologies

  • Pyodide: A Python distribution for the browser and Node.js based on WebAssembly. It allows installing and running Python packages with micropip.
  • Wasmtime: A standalone WebAssembly runtime that can execute WebAssembly modules outside the browser.

Code Examples

Basic Python in the Browser with Pyodide:

<!DOCTYPE html>
<html>
<head>
<script src="https://cdn.jsdelivr.net/pyodide/v0.24.1/pyodide.js"></script>
</head>
<body>
<script>
async function main() {
let pyodide = await loadPyodide();
console.log(pyodide.runPython(`
import sys
print(sys.version)
print("Hello, Pyodide!")
`));
}
main();
</script>
</body>
</html>

Using External Python Packages:

from pyodide.http import open_url
import pandas as pd
import matplotlib.pyplot as plt

# Note: pandas and matplotlib must be loaded into Pyodide first
# (for example via pyodide.loadPackage or micropip).

# Fetch a CSV file from a URL (open_url returns a file-like StringIO object)
url = "https://raw.githubusercontent.com/jerry-shen/Pandas_Tutorial/master/sales.csv"
csv_content = open_url(url)

# Read the CSV into a Pandas DataFrame
df = pd.read_csv(csv_content)

# Calculate total sales per product
product_sales = df.groupby('Product')['SalePrice'].sum()

# Create a bar chart
product_sales.plot(kind='bar')
plt.title('Total Sales per Product')
plt.xlabel('Product')
plt.ylabel('Total Sales')
plt.show()

    Real-World Applications

    • High-Performance Web Applications: Python combined with WebAssembly can handle computationally intensive tasks in web applications, such as simulations, data analytics, and real-time rendering. Examples include online code editors, image editors, and 3D modeling tools.
    • Data Visualization Dashboards: Interactive data dashboards powered by libraries like Matplotlib or Seaborn can be made faster and more responsive using WebAssembly.
    • Machine Learning in the Browser: TensorFlow.js uses WebAssembly to accelerate the performance of its models in the browser, enabling real-time image and speech recognition tasks.
    • Cross-Platform Development: Build applications that run consistently on mobile, desktop, and the browser using Python and WebAssembly.

    Deployment Advantages

    WebAssembly’s sandboxed environment limits access to underlying hardware and system resources, enhancing the security of web applications. This is particularly important for applications handling sensitive data, such as those in health, finance, and e-commerce.

    Conclusion

    The combination of Python and WebAssembly opens up new possibilities for web development. By leveraging WebAssembly’s performance and security benefits, developers can create powerful and efficient web applications using the flexibility and rich ecosystem of Python.



    Understanding the WebAssembly ABI: Navigating the wasm32-unknown-unknown and the Future

    WebAssembly (Wasm) has emerged as a powerful, portable, and secure binary format for the web and beyond. As developers increasingly compile code from languages like C, C++, and Rust to Wasm, understanding the Application Binary Interface (ABI) becomes crucial for ensuring interoperability and correct execution. This post delves into the intricacies of the Wasm ABI, with a specific focus on the wasm32-unknown-unknown target and the recent developments affecting the C ABI.

    What is an ABI and Why Does it Matter for Wasm?

    An ABI, or Application Binary Interface, is essentially a contract between different parts of a program or between a program and the operating system. It defines low-level details such as how functions are called, how data structures are laid out in memory, and how data types are represented when passed between modules.

    In the context of Wasm, the ABI dictates how Wasm modules interact with each other and with the host environment they run in (e.g., a web browser, a server-side runtime). Unlike traditional native ABIs that deal with hardware registers, Wasm operates on a stack-based virtual machine with a defined set of value types (i32, i64, f32, f64). A Wasm ABI defines how higher-level language constructs (like C structs or Rust enums) are translated into these Wasm value types and how they are passed as function arguments and return values.
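
    To make this concrete, here is a hedged sketch (not taken from the post; the Point type and translate function are made up for illustration) of a C-compatible struct exported from Rust with the C calling convention. How such an aggregate is lowered onto the Wasm value types, whether flattened into scalars or passed through a pointer into linear memory, is exactly what a C ABI for Wasm pins down:

    // Illustrative example: a C-compatible aggregate that crosses the Wasm
    // module boundary by value.
    #[repr(C)]
    pub struct Point {
        pub x: i32,
        pub y: i32,
    }

    // Exported with the C calling convention. The ABI decides whether `Point`
    // is flattened into i32 parameters or passed via a pointer into linear memory.
    #[no_mangle]
    pub extern "C" fn translate(p: Point, dx: i32, dy: i32) -> Point {
        Point { x: p.x + dx, y: p.y + dy }
    }

    Built with, for example, cargo build --target wasm32-unknown-unknown, the exported translate function is precisely the kind of signature whose lowering the C ABI discussion below concerns.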

    Understanding the Wasm ABI is vital for:

    • Interoperability: Ensuring that modules compiled from different source languages can correctly call each other’s functions and exchange data.
    • Correctness: Guaranteeing that data is interpreted correctly as it crosses the boundaries between the Wasm module and the host or other modules.
    • Toolchain Compatibility: Enabling different compilers and tools to produce Wasm modules that can be linked and executed together.

    The wasm32-unknown-unknown Target

    The wasm32-unknown-unknown target triple is commonly used when compiling to Wasm. It signifies a 32-bit Wasm architecture (wasm32) with an unknown vendor and an unknown operating system. This target is designed to make minimal assumptions about the host environment, making it suitable for environments where there isn’t a well-defined system interface, such as basic web embeddings. Compiling to wasm32-unknown-unknown results in a Wasm module that primarily relies on imports provided by its embedder.

    The C ABI Challenge with wasm32-unknown-unknown

    Historically, the extern "C" ABI used by the Rust toolchain for the wasm32-unknown-unknown target deviated from the emerging standard C ABI for WebAssembly. As highlighted in the Hacker Noon article, this non-standard implementation leaked internal compiler details and could lead to compatibility issues when trying to link Rust-generated Wasm with Wasm compiled from other languages using standard C ABIs (like C/C++ compiled with Clang).

    This divergence was a historical artifact, and while some issues were addressed over time, the fundamental difference remained for the wasm32-unknown-unknown target, unlike other Wasm targets in Rust (like WASI targets) that adopted a more correct ABI definition earlier.

    The Shift to a Standard C ABI in Rust

    To rectify this, the Rust compiler is transitioning to use the standard C ABI definition for the wasm32-unknown-unknown target. This change means that the generated Wasm binaries will adhere to the agreed-upon conventions for passing C-like data types.

    According to the Rust blog post, this transition involves:

    • A future-incompat warning introduced in Rust 1.87.0 (released in May 2025) to alert developers to function signatures that will be affected by the ABI change.
    • A -Zwasm-c-abi=(legacy|spec) flag to allow developers to test their code with the new standard ABI (spec) before it becomes the default.
    • The eventual removal of the flag and the standard ABI becoming the default, expected in the summer of 2025.

    Developers using extern "C" with the wasm32-unknown-unknown target in Rust need to be aware of this change and test their code to ensure compatibility with the new ABI.

    WASI and the Component Model: Towards Higher-Level Interfaces

    While understanding the low-level C ABI is important for interoperability at the function call level, the WebAssembly ecosystem is also moving towards higher-level, more language-agnostic interfaces.

    • WASI (WebAssembly System Interface): WASI provides a standardized set of APIs for Wasm modules to interact with the host system, offering capabilities like file I/O, networking, and access to environment variables. This allows Wasm modules to perform operations traditionally handled by an operating system, making them more capable outside of purely computational tasks (a small sketch follows this list).
    • The Component Model: Building on WASI, the Component Model defines a way to structure and compose Wasm modules. It introduces a Canonical ABI, a higher-level agreement on how different components (which can be written in different source languages) can communicate complex data types like strings, lists, and records without needing to understand the low-level memory representation details specific to each source language’s ABI.
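
    For the WASI point above, here is a small, hedged sketch: ordinary Rust standard-library I/O, compiled for a WASI target such as wasm32-wasi and run under a runtime that preopens a directory, is translated into WASI system calls on the module's behalf (the file name and runtime flags below are illustrative):

    use std::fs;
    use std::io::Write;

    fn main() -> std::io::Result<()> {
        // Under a WASI runtime started with a preopened directory
        // (for example `wasmtime run --dir=. app.wasm`), ordinary std file I/O
        // is translated into WASI calls by the Rust standard library.
        let mut file = fs::File::create("greeting.txt")?;
        file.write_all(b"Hello from a WASI module!\n")?;
        let contents = fs::read_to_string("greeting.txt")?;
        println!("read back: {contents}");
        Ok(())
    }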

    The Component Model and its Canonical ABI are becoming the preferred way to handle interactions between Wasm modules and the host, and between different Wasm modules, abstracting away some of the complexities of the lower-level ABIs like the C ABI. This promotes greater interoperability and allows for more secure and efficient composition of Wasm-based applications.

    Implications for Developers

    For developers working with Wasm:

    • Be Mindful of ABIs: When writing code that will interact across Wasm module boundaries or with the host, be aware of the ABI being used, especially the C ABI for low-level interactions.
    • Test with New Toolchain Changes: If using Rust with wasm32-unknown-unknown and extern "C", test with the new standard C ABI using the -Zwasm-c-abi=spec flag to ensure your code remains compatible.
    • Explore WASI and the Component Model: For building more complex applications that require system interactions or composition of modules from different languages, consider leveraging WASI and the Component Model. These provide a more standardized and robust approach to defining interfaces.
    • Stay Updated on Tooling: The Wasm ecosystem and its tooling are evolving rapidly. Keep your compilers and tools updated to benefit from the latest ABI fixes and features.

    Conclusion

    The WebAssembly ABI is a fundamental aspect of building interoperable and correct Wasm applications. While the wasm32-unknown-unknown target in Rust historically had a non-standard C ABI, the ecosystem is moving towards standardization. Coupled with the advancements in WASI and the Component Model, the future of Wasm development points towards more robust, secure, and language-agnostic ways for modules to interact, unlocking the full potential of this transformative technology.


    Creating a web service in Rust and running it in WebAssembly

    In this blog post, I will show you how to create a simple web service in Rust and compile it to Wasm. Then, I will show you how to run the Wasm service on the server using a Wasm runtime.

    Creating a web service

    To create a web service in Rust, we will use the hyper crate, which is a fast and low-level HTTP library. Hyper provides both server and client APIs for working with HTTP requests and responses. To use hyper, we add it, together with the tokio async runtime it builds on, as dependencies in our Cargo.toml file:

    [dependencies]
    hyper = { version = "0.14", features = ["full"] }
    tokio = { version = "1", features = ["full"] }

    Then, we can write our web service code in the src/main.rs file. The code below creates a simple web service that responds with “Hello, World!” to any GET request:

    use hyper::{Body, Request, Response, Server};
    use hyper::service::{make_service_fn, service_fn};
    use std::convert::Infallible;

    // A function that handles an incoming request and returns a response
    async fn hello_world(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
        Ok(Response::new(Body::from("Hello, World!")))
    }

    #[tokio::main]
    async fn main() {
        // Bind the server to an address
        let addr = ([127, 0, 0, 1], 3000).into();
        // Create a service function that maps each connection to a hello_world function
        let make_service = make_service_fn(|_conn| async {
            Ok::<_, Infallible>(service_fn(hello_world))
        });
        // Create a server with the service function
        let server = Server::bind(&addr).serve(make_service);
        // Run the server and handle any error
        if let Err(e) = server.await {
            eprintln!("server error: {}", e);
        }
    }

    To run the web service locally, we can use the cargo run command in the terminal. This will compile and execute our Rust code. We can then test our web service by sending a GET request using curl or a web browser:

    $ curl http://localhost:3000
    Hello, World!

    Creating a web service client

    To demonstrate how to use hyper as a web service client, we can write another Rust program that sends a GET request to our web service and prints the response body. The code below shows how to do this using the hyper::Client struct:

    use hyper::{Body, Client};
    use hyper::body::HttpBody as _;

    #[tokio::main]
    async fn main() {
        // Create a client
        let client = Client::new();
        // Send a GET request to the web service
        let uri = "http://localhost:3000".parse().unwrap();
        let mut resp = client.get(uri).await.unwrap();
        // Print the status code and headers
        println!("Response: {}", resp.status());
        println!("Headers: {:#?}\n", resp.headers());
        // Print the response body
        while let Some(chunk) = resp.body_mut().data().await {
            let chunk = chunk.unwrap();
            println!("{}", std::str::from_utf8(&chunk).unwrap());
        }
    }

    To run the web service client locally, we can use the cargo run command in another terminal. This will compile and execute our Rust code. We should see something like this:

    $ cargo run
    Response: 200 OK
    Headers: {
    "content-length": "13",
    }
    Hello, World!

    Creating a database client


    To make our web service more useful, we can add some database functionality to it. For example, we can store and retrieve some data from a MySQL database using the mysql_async crate, which is an asynchronous MySQL driver based on tokio. To use mysql_async, we need to add it as a dependency in our Cargo.toml file:

    [dependencies]
    mysql_async = "0.28"

    Then, we can modify our web service code in the src/main.rs file to connect to a MySQL database and execute some queries. The code below assumes that we have a MySQL database running on localhost with the default port (3306), username (root), password (password), and database name (test). The code also assumes that we have a table called users with two columns: id (int) and name (varchar).

    use hyper::{Body, Request, Response, Server};
    use hyper::service::{make_service_fn, service_fn};
    use mysql_async::prelude::*;
    use mysql_async::{Pool, Row};
    use std::convert::Infallible;

    // A function that handles an incoming request and returns a response
    async fn hello_world(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
        // Create a pool of connections to the MySQL database
        let pool = Pool::new("mysql://root:password@localhost:3306/test");
        // Get a connection from the pool
        let mut conn = pool.get_conn().await.unwrap();
        // Execute a query to insert a new user
        conn.exec_drop("INSERT INTO users (name) VALUES (?)", ("Alice",)).await.unwrap();
        // Execute a query to select all users
        let users: Vec<Row> = conn.query("SELECT id, name FROM users").await.unwrap();
        // Drop the connection and return it to the pool
        drop(conn);
        // Format the users as a string
        let mut output = String::new();
        for user in users {
            let (id, name): (i32, String) = mysql_async::from_row(user);
            output.push_str(&format!("User {}: {}\n", id, name));
        }
        // Return the output as the response body
        Ok(Response::new(Body::from(output)))
    }

    #[tokio::main]
    async fn main() {
        // Bind the server to an address
        let addr = ([127, 0, 0, 1], 3000).into();
        // Create a service function that maps each connection to a hello_world function
        let make_service = make_service_fn(|_conn| async {
            Ok::<_, Infallible>(service_fn(hello_world))
        });
        // Create a server with the service function
        let server = Server::bind(&addr).serve(make_service);
        // Run the server and handle any error
        if let Err(e) = server.await {
            eprintln!("server error: {}", e);
        }
    }

    To run the web service locally, we can use the cargo run command in the terminal. This will compile and execute our Rust code. We can then test our web service by sending a GET request using curl or a web browser:

    $ curl http://localhost:3000
    User 1: Alice
    User 2: Alice
    User 3: Alice

    Building and running the web service

    To build our web service as a Wasm binary, we need to use the cargo-wasi crate, which is a Cargo subcommand for building Rust code for Wasm using the WebAssembly System Interface (WASI). WASI is a standard interface for Wasm programs to access system resources such as files, network, and environment variables. To install cargo-wasi, we can use the cargo install command:

    $ cargo install cargo-wasi

    Then, we can use the cargo wasi build command to build our web service as a Wasm binary. This will create a target/wasm32-wasi/debug directory with our Wasm binary file:

    $ cargo wasi build
    Compiling hyper v0.14.15
    Compiling mysql_async v0.28.0
    Compiling wasm-service v0.1.0 (/home/user/wasm-service)
    Finished dev [unoptimized + debuginfo] target(s) in 1m 23s

    To run our web service as a Wasm binary on the server, we must use a Wasm runtime that supports WASI and network features. There are several Wasm runtimes available, such as Wasmtime, Wasmer, and WasmEdge. In this blog post, I will use WasmEdge as an example.

    To install WasmEdge, follow the instructions on its website.

    Then, we use the wasmedge command to run our web service as a Wasm binary. We need to pass some arguments to enable WASI and network features and to bind our web service to an address:

    $ wasmedge --dir .:. --dir /tmp:/tmp --net 127.0.0.1:3000 target/wasm32-wasi/debug/wasm_service.wasm --addr 127.0.0.1:3000

    We can then test our web service by sending a GET request using curl or a web browser:

    $ curl http://localhost:3000
    User 1: Alice
    User 2: Alice
    User 3: Alice

    Conclusion

    In this post, we created a simple web service in Rust, compiled it to Wasm, and ran the Wasm service on the server using a Wasm runtime. I hope you have enjoyed this tutorial and learned something new.


    CNCF Report: The State of WebAssembly 2023

    WebAssembly (Wasm) is a technology that allows developers to write code in various languages and run it on any platform and environment. Wasm can offer benefits such as performance, security, portability, and interoperability. Some of the use cases of Wasm are:

    • Web development: Wasm can enhance web applications by enabling them to use native code for computationally intensive tasks, such as image processing, video editing, gaming, machine learning, and more. Wasm can also improve the compatibility and usability of web applications by allowing them to use existing libraries and frameworks written in languages other than JavaScript.
    • Edge computing: Wasm can enable edge computing by allowing developers to deploy lightweight and portable code to edge devices, such as IoT sensors, smart cameras, drones, and more. Wasm can also provide security and isolation for edge applications by running them in a sandboxed environment.
    • Serverless computing: Wasm can enable serverless computing by allowing developers to write and run functions in any language and on any cloud provider. Wasm can also reduce the cold start time and resource consumption of serverless functions by using a compact and efficient binary format.
    • Microservices: Wasm can enable microservices by allowing developers to write and run modular and independent services in any language and on any platform. Wasm can also facilitate the communication and integration of microservices by using a common interface and protocol.
    • Machine learning: Wasm can enable machine learning by allowing developers to write and run models in any language and on any device. Wasm can also optimize the performance and accuracy of machine learning models by using native code and hardware acceleration.

    However, despite its potential, Wasm is still experiencing slow adoption in the industry. Some of the reasons for this are:

    • Lack of awareness: Many developers are not aware of the existence or benefits of Wasm, or how to use it in their projects. There is a need for more education and outreach to raise awareness and interest in Wasm among developers.
    • Lack of tooling: Many tools and frameworks that support Wasm are still immature or experimental, or lack features or documentation. There is a need for more development and improvement of the tooling ecosystem for Wasm, such as compilers, runtimes, libraries, SDKs, debuggers, and more.
    • Lack of standards: Many standards and specifications that define the features and functionality of Wasm are still under development or not widely adopted. There is a need for more collaboration and coordination among the stakeholders and communities involved in the standardization process for Wasm, such as the World Wide Web Consortium (W3C), the WebAssembly Community Group (WCG), the Bytecode Alliance, the Cloud Native Computing Foundation (CNCF), and more.
    • Lack of support: Many platforms and environments that could benefit from Wasm do not support it natively or fully. There is a need for more adoption and integration of Wasm by the platforms and environments that developers use, such as web browsers, cloud providers, edge devices, operating systems, and more.

    Wasm is a promising technology that has the potential to revolutionize the development and deployment of applications across various domains and platforms. However, Wasm also faces some challenges and barriers that need to be addressed by the community and the industry.

    CNCF has recently published a report titled “The State of WebAssembly in 2023”, which provides an overview of the current trends and challenges of Wasm in the cloud-native ecosystem. The report is based on a survey of over 500 developers, operators, and decision-makers from various industries and regions.

    The report covers the following topics:

    • The benefits and use cases of Wasm: The report highlights the main benefits of Wasm, such as performance, security, portability, and interoperability. The report also showcases some of the use cases of Wasm in different domains, such as edge computing, serverless computing, microservices, machine learning, gaming, and more.
    • The challenges and barriers of Wasm adoption: The report identifies some of the challenges and barriers that hinder the adoption of Wasm in the cloud-native ecosystem, such as lack of awareness, tooling, documentation, standards, and support. The report also provides some recommendations and best practices to overcome these challenges and barriers.
    • The state of Wasm tools and frameworks: The report analyzes the current state of Wasm tools and frameworks, such as compilers, runtimes, libraries, SDKs, and more. The report also evaluates the maturity and popularity of these tools and frameworks based on various criteria, such as features, stability, performance, community, and more.
    • The future outlook and trends of Wasm: The report predicts the future outlook and trends of Wasm in the cloud-native ecosystem based on the survey results and expert opinions. The report also discusses some of the emerging topics and opportunities for Wasm development and innovation, such as WASI (WebAssembly System Interface), Wasmtime (a standalone Wasm runtime), eBPF (extended Berkeley Packet Filter), and more.

    The report concludes that Wasm is a promising technology that has the potential to revolutionize the cloud-native ecosystem by enabling faster, safer, and more portable applications. However, Wasm also faces some challenges and barriers that need to be addressed by the community and the industry. The report suggests that CNCF can play a key role in facilitating the adoption and advancement of Wasm by providing guidance, support, resources, and collaboration opportunities for the Wasm community.

    You can read the full report from its official website or download it as a PDF file.


    WASM for microservices

    In this blog post, I will share with you my experience of trying to use WebAssembly (Wasm) for building microservices. Wasm is a new technology that allows you to run compiled code in a browser or on a server. It promises to be faster, safer and more portable than traditional languages. However, as I learned the hard way, Wasm is not yet ready for prime time when it comes to microservices.

    My project was inspired by a talk I attended at the Open Source Summit North America, where Kingdon Barrett of Weaveworks and Will Christensen of Defense Unicorns presented their findings on using Wasm for microservices. They had spent about three weeks exploring this topic and came up with a simple prototype that failed to meet their expectations. They also shared some of the challenges and limitations they faced with Wasm, such as:

    • No direct access to the network, file system, or strings.
    • No standard way to communicate with other services or external APIs.
    • No mature tools or frameworks for developing and deploying Wasm microservices.

    I was intrigued by their talk and decided to give it a try myself. I wanted to see if I could overcome some of the obstacles they encountered and build a working microservice using Wasm. I chose Rust as my programming language, since it has good support for compiling to Wasm and is designed for performance and reliability. I also used WASI (WebAssembly System Interface) and WAGI (WebAssembly Gateway Interface) as the runtime environments for my microservice. WASI provides a set of system calls that Wasm modules can use, such as reading and writing files or opening sockets. WAGI allows Wasm modules to handle HTTP requests and responses using standard input and output.
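
    To make the WAGI model concrete, here is a minimal, hedged sketch of a handler. WAGI follows CGI-style conventions, so the module reads the request from standard input and environment variables and writes headers, a blank line, and then the body to standard output (the headers and toy logic here are illustrative):

    use std::io::{self, Read, Write};

    fn main() -> io::Result<()> {
        // WAGI follows CGI conventions: the request body arrives on stdin and the
        // response is written to stdout as headers, a blank line, then the body.
        let mut body = String::new();
        io::stdin().read_to_string(&mut body)?;

        let mut out = io::stdout();
        writeln!(out, "Content-Type: text/plain")?;
        writeln!(out)?;
        writeln!(out, "Received {} bytes of input", body.len())?;
        Ok(())
    }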

    My goal was to create a simple microservice that would take a text input from the user and return a summary of it using an external API. I thought this would be a good use case for Wasm, since it involves some computation and data processing that could benefit from the speed and safety of Wasm. I also wanted to test how easy or hard it would be to integrate with an existing service using Wasm.

    The first challenge I faced was how to pass a string as an argument to my microservice. As Barrett and Christensen pointed out, there is no string type in Wasm. Instead, you have to manually allocate memory and copy bytes from one place to another. This is not only tedious but also error-prone. Fortunately, there are some libraries that can help with this task, such as wasm-bindgen for Rust. This library allows you to use Rust types like strings or vectors in your Wasm code and automatically handles the memory management for you.
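
    For example, here is a hedged sketch of what that looks like with wasm-bindgen: the macro generates the glue that allocates linear memory, copies the UTF-8 bytes across the JavaScript/Wasm boundary, and frees them again, so the Rust side simply accepts and returns a String (the toy summarizer itself is illustrative):

    use wasm_bindgen::prelude::*;

    // With #[wasm_bindgen], strings are copied across the JS/Wasm boundary for us.
    #[wasm_bindgen]
    pub fn summarize(text: String) -> String {
        // Toy "summary": the first sentence, or the whole text if there is no period.
        match text.find('.') {
            Some(idx) => text[..=idx].to_string(),
            None => text,
        }
    }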

    The next challenge was how to make an HTTP request to the external API from my microservice. Since Wasm does not have direct access to the network, I had to use WASI’s socket functions to create a TCP connection and send and receive data over it. This was not too difficult, but it required me to write some low-level code that dealt with byte arrays, network protocols and error handling. It would have been much easier if I could use a high-level library like reqwest or curl for Rust, but unfortunately they do not work with Wasm yet.

    The final challenge was how to deploy my microservice and make it accessible from the web. I used WAGI as the gateway for my microservice, which allowed me to run it as a standalone executable on any platform that supports WASI. However, WAGI is still very experimental and lacks some features that are essential for production use, such as logging, monitoring or authentication. Moreover, WAGI does not support HTTPS or load balancing, which means that I had to use another layer of proxy or service mesh to expose my microservice securely and reliably.

    After spending several hours coding, debugging and testing, I managed to get my microservice working. It was able to take a text input from the user and return a summary of it using an external API. However, I was not very satisfied with the result. The code was complex, verbose and fragile. The performance was not impressive either. The microservice took about 300 milliseconds to respond on average, which is not much faster than a typical Node.js or Python service. The deployment process was also cumbersome and insecure.

    In conclusion, I learned that Wasm is not yet ready for microservices. It has some potential advantages over traditional languages, such as speed, safety and portability, but it also has many drawbacks and limitations that make it unsuitable for real-world scenarios. It lacks standardization, maturity and ecosystem support that are essential for developing and deploying microservices effectively. It may improve in the future as more tools and frameworks emerge around it, but for now I would not recommend using it for microservices.


    WebAssembly Binary Format Review

    In this blog post, we will review the main concepts of WebAssembly binary format, which is a dense linear encoding of the abstract syntax of WebAssembly modules. WebAssembly (abbreviated as Wasm) is a binary instruction format for a stack-based virtual machine, designed as a portable compilation target for programming languages. Wasm can be executed at native speed by taking advantage of common hardware capabilities available on a wide range of platforms.

    The binary format for WebAssembly modules is defined by an attribute grammar whose only terminal symbols are bytes. A byte sequence is a well-formed encoding of a module if and only if it is generated by the grammar. The grammar specifies how to encode each syntactic construct of WebAssembly, with integers written in the LEB128 variable-length encoding (a scheme conceptually similar to UTF-8).

    The binary format has several advantages over a textual format. It is more compact, reducing the size of modules and improving loading times. It is also more efficient to parse and validate, as it can be done in a single pass over the bytes. Moreover, it is designed to be easy to generate and manipulate by compilers and tools.

    The binary format consists of the following main components:

    • A module header that identifies the file as a WebAssembly module and indicates the version of the format.
    • A sequence of sections that contain the actual data of the module, such as types, functions, globals, tables, memories, code, and so on.
    • Optional custom sections, such as the name section, which provides human-readable names for the elements of the module.

    Each section has a unique id and a payload that depends on the section type. The non-custom sections must appear in a fixed order, while custom sections may appear anywhere between them. Custom sections can be used to store additional information that is not part of the core specification, such as debugging symbols or source maps.

    The following diagram shows an example of a WebAssembly binary module with three sections: type, function, and code.

    [Byte-layout table omitted: it listed, column by column, the raw bytes of the module header and of the type, function, and code sections of this example module.]
    The module header consists of four magic bytes, 0x00 0x61 0x73 0x6d (a zero byte followed by “asm” in ASCII), followed by four bytes that indicate the version number in little-endian order. In this case, the version is 1.
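
    As a small illustration (the helper below is an assumption of mine, not part of any official API), checking that eight-byte preamble is straightforward:

    // Illustrative helper: check the eight-byte preamble of a Wasm binary,
    // i.e. the magic bytes "\0asm" followed by the version as a little-endian u32.
    fn check_preamble(bytes: &[u8]) -> Option<u32> {
        if bytes.get(0..4)? != b"\0asm" {
            return None;
        }
        let version = u32::from_le_bytes(bytes.get(4..8)?.try_into().ok()?);
        Some(version)
    }

    fn main() {
        // The first eight bytes of any version 1 module.
        let preamble = [0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00];
        assert_eq!(check_preamble(&preamble), Some(1));
    }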

    Each section starts with a single byte that indicates its id, followed by its size in bytes. In this case, there are three sections: type (id = 1), function (id = 3), and code (id = 10).

    The type section consists of a single byte that indicates the number of function types in the module, followed by sequences of bytes that encode each function type. A function type is encoded as a byte that indicates the form of the type (currently only 0x60 is allowed), followed by two vectors of bytes that indicate the parameter types and the return types respectively. A vector is encoded as a byte that indicates the length of the vector, followed by one byte per element. A value type is encoded as a single byte that indicates its numeric representation: 0x7f for i32, 0x7e for i64, 0x7d for f32, and 0x7c for f64. In this case, there is one function type: (func) -> (i32).
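
    Following that scheme, here is a hedged sketch (the helper function is purely illustrative) of how one might emit the payload of a type section containing the single function type (i32, i32) -> (i32); for counts this small, each LEB128 length fits in one byte:

    // Illustrative helper: encode the payload of a type section holding one
    // function type, (i32, i32) -> (i32).
    fn encode_type_section_payload() -> Vec<u8> {
        let mut bytes = Vec::new();
        bytes.push(0x01);               // number of function types
        bytes.push(0x60);               // form: function type
        bytes.push(0x02);               // parameter vector length
        bytes.extend([0x7f, 0x7f]);     // two i32 parameters
        bytes.push(0x01);               // result vector length
        bytes.push(0x7f);               // one i32 result
        bytes
    }

    fn main() {
        assert_eq!(
            encode_type_section_payload(),
            vec![0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f]
        );
    }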

    The function section consists of a single byte that indicates the number of functions in the module, followed by one byte per function that indicates its type index. The type index is an unsigned integer that refers to an entry in the type section. In this case, there is one function with type index 0.

    The code section consists of a single byte that indicates the number of function bodies in the module, followed by sequences of bytes that encode each function body. A function body is encoded as its size in bytes, followed by a vector of local variable declarations, followed by the sequence of bytes that encodes its instructions. A local variable declaration is encoded as two bytes: one that indicates the count and one that indicates the type. An instruction is encoded as a single byte that indicates its opcode, followed by zero or more bytes that indicate its immediate operands. In this case, there is one function body with no local variables and three instructions: i32.const 1 (opcode = 0x41, immediate = 1), call 0 (opcode = 0x10, immediate = 0), and end (opcode = 0x0b).

    This example illustrates how the WebAssembly binary format encodes modules in a compact and efficient way. For more details on the binary format and its grammar rules, you can refer to the official specification.


    .NET 8 improves WebAssembly support in C# and Blazor

    .NET 8, the next version of Microsoft’s software development framework, will bring new features and improvements for web developers who use Blazor, a technology that allows writing web apps with C# and .NET. Blazor can run on different platforms, such as WebAssembly, ASP.NET Core, or native client apps.

    One of the main enhancements in .NET 8 for Blazor is the ability to combine server-side and client-side rendering with the same component model. This means developers can choose the best rendering mode for each scenario, depending on the performance, interactivity, and scalability requirements. For example, server-side rendering can improve the initial load time and SEO of a web page, while client-side rendering can enable offline support and faster UI updates.

    .NET 8 also introduces two new features for server-side rendering: streaming rendering and client interactivity. Streaming rendering allows sending content updates to the browser as they become available, instead of waiting for the entire page to be rendered. This can enhance the user experience for pages that need to perform long-running async tasks. Client interactivity allows adding JavaScript functionality to specific components or pages, without affecting the rest of the server-side rendered app.

Another improvement in .NET 8 for Blazor is the support for rendering components outside the context of an HTTP request. This enables generating HTML fragments from Blazor components, which can be useful for scenarios such as sending automated emails. In the future, Microsoft plans to enable static site generation for Blazor, which can improve performance and security.

.NET 8 also works on optimizing the performance of Blazor WebAssembly, which is a way of running .NET code in the browser using standard web technology. One of the optimizations is the jiterpreter, a hybrid execution mode that adds partial just-in-time compilation on top of the .NET interpreter (https://www.infoworld.com/article/3697728/microsoft-net-8-boosts-blazor-webassembly.html). Microsoft claims that it has seen a 20% improvement in UI rendering and a 2x improvement in JSON serialization and deserialization with the jiterpreter.

    Other optimizations for Blazor WebAssembly include leveraging the latest WebAssembly specifications, such as SIMD (Single Instruction Multiple Data), which can boost performance for parallel computations. .NET 8 also supports hot reload for Blazor WebAssembly, which allows applying code changes without restarting the app. Additionally, .NET 8 introduces a new packaging format called Webcil, which is more web-friendly and reduces the size of Blazor WebAssembly apps.

    .NET 8 also brings stability to QuickGrid, a fast data grid component that was introduced in .NET 7 as a preview feature. QuickGrid can display large amounts of data with features such as sorting, filtering, grouping, and editing. Moreover, .NET 8 adds APIs for monitoring activity on circuits in Blazor Server, which can help free up resources by detecting idle or disconnected clients.

.NET 8 is expected to be released in November 2023. It is currently available as a preview version that can be downloaded from Microsoft’s website. Developers who want to learn more about Blazor can access free tutorials, videos, code samples, and content from Microsoft Learn.
