RISC OS Today and Tomorrow: What Would Really Benefit the Platform

Author: Paolo Fabio Zaino.

RISC OS is a unique operating system with a long and important history. It is relatively fast, elegant, and responsive, and people who use it enjoy it precisely because it feels different from mainstream systems. At the same time, RISC OS has been adapted to run on increasingly modern hardware, and that creates a growing gap between what the hardware can do and what the operating system is able to use.

This article is not about turning RISC OS into Linux, Windows, or macOS. It is about identifying which modern capabilities would genuinely improve the everyday user experience on today’s RISC OS, and then explaining how those same challenges are being addressed in Merlin.

The goal is clarity, not ideology.


What users actually experience today

On current RISC OS systems, most users recognise a familiar pattern:

  • The system feels very fast when lightly loaded.
  • Running several tasks at once can reduce responsiveness (especially if those tasks’ code is not carefully written with cooperative multitasking in mind).
  • Heavy workloads impact the whole system, even if the user is only interacting with a single application.
  • Modern hardware often feels underused.
  • The performance of video or network applications does not match that of other operating systems, such as Linux, running on the same hardware.
  • It is not uncommon for users to reach for the [ALT] + [BREAK] keys to interrupt tasks that have become unresponsive and are impacting the entire system.
  • Sometimes a RISC OS machine simply freezes, the user may not be sure why, and the only thing left to do is press the reset button. RISC OS code often relies on careful interrupt-state discipline; mistakes around interrupt masking and restoration can have severe system-wide effects.

Understanding the original design intent of RISC OS

These behaviours are not bugs. They are direct consequences of architectural choices made at a time when single-core CPUs were the norm, memory bandwidth was limited, and hardware resources were extremely constrained. They are also not the result of poor engineering. The original RISC OS developers did an outstanding job given the hardware constraints, development tools, requirements, and timelines imposed by Acorn Computers in the mid-1980s.

To understand why RISC OS behaves the way it does, it is important to look at the systems it was originally designed for. Early Archimedes machines included DMA support, but only in a limited and specialised form, were often deployed without hard disks, and had planned configurations as low as 256 KB of RAM. These constraints strongly influenced the operating system’s structure and assumptions.

The limited use of DMA on the early machines is particularly telling. On systems designed for preemptive multitasking at the time, DMA was essential to allow the CPU to continue executing other tasks while waiting for I/O operations to complete. On the original Archimedes, most I/O operations instead occupied the CPU directly (DMA was used only for double-buffering sound data, a linear buffer for the mouse sprite, and a circular buffer for the video refresh). This suggests that the system design strongly favoured a single-task execution model in which the system simply waited for I/O data to become available. More generalised DMA arrived much later: RiscPC-class machines and the corresponding OS versions introduced a DMA Manager, and DMA became available to a wider range of devices.

Similarly, planned (and fortunately never released) machines with only 256 KB of RAM would never have been capable of running a full desktop environment with multiple applications. This suggests that early versions of RISC OS were not designed with general-purpose multitasking in mind.

In that context, RISC OS 1.0 can be seen as a direct continuation of Acorn’s earlier operating system design, effectively Acorn MOS adapted to the ARM architecture. It was intended as a fast, responsive system for interactive use, not as a multi-user or multitasking operating system in the modern sense.

Furthermore, a scan of an original email between Arthur’s (RISC OS 1.0) project lead Paul Fellows and Sophie Wilson (at Acorn Computers) provides documentary evidence that the origins of RISC OS lay in the porting of Acorn MOS to the then-new ARM platform.

This also helps explain why, for example, the RISC OS API can appear “old-school” in style, resembling the calling conventions of classic microcomputer monitor routines rather than the more defensive and structured system call interfaces that became common later. RISC OS SWIs generally trust the caller to supply valid parameters, much like ROM monitor subroutines of the late 1970s and early 1980s, reflecting the environment and assumptions in which the system originally evolved.

If a fairer comparison is desired when evaluating modern RISC OS, it is instructive to compare it with Linux running on very constrained hardware, such as a Raspberry Pi 1. Even today, such a comparison clearly illustrates why RISC OS was designed the way it was, and why many of its original design choices remain internally consistent when viewed in their proper historical and hardware context.

However, understanding what could improve RISC OS starts with understanding how these limitations affect everyday use.


Multi-core support: smoothness, not raw speed

When people hear “multi-core”, they often think only in terms of speed. For RISC OS, the real benefit is smoothness.

On a single-core system, all tasks share the same processor by taking turns. The operating system rapidly switches between tasks, either by giving each one a small slice of time or, in a cooperative scheduler, by relying on each task to give control back to the OS. This allows multiple tasks to exist at once, but they are not truly running at the same time.

On RISC OS, as more tasks are added, the division of time is not as fair as under a preemptive scheduler, where the OS is in control, because each task controls its own time slicing. Eventually the system can become unresponsive. (I have recently demoed a workaround to this using managed code via my Ultima VM, where the system instead remains responsive; to achieve this, however, the VM reduces the time slots given to each running Ultima task.)

With access to multiple cores, the operating system can place different tasks on different cores. A heavy computation can run on one core while the desktop, input handling, and audio remain responsive on another.

For a general user, this means:

  • Fewer pauses when doing background work.
  • Smoother window movement under load.
  • The ability to run demanding tasks without the system feeling “busy”.

It does not make every task faster (especially tasks that are historically single-threaded), but it makes the system more predictable and pleasant to use. It is also important to note that handling multiple cores means adding more synchronisation code to the kernel, which can slightly increase kernel overhead on a single-core system. This is generally a fair price to pay for the benefits that come from adopting multi-core systems.

As application development complexity progresses, new applications might be developed to use internal multi-threading; such an approach could benefit from multi-core support (although the gains there are bounded by Amdahl’s law) and so gain performance.

Obviously, in situations with light load, applications will also experience some performance improvement, simply because there will be “more CPU cycles” available for the few running apps.

The challenge for traditional RISC OS in adopting multi-core support is its internal architecture: most of it assumes single tasking, and the WIMP (the RISC OS desktop) does not possess thread-safe queues. Although in this scenario the WIMP could continue to work on a single core, the rest of the kernel would need substantial work.
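
To make the benefit concrete, here is a hypothetical illustration written with portable C11 threads rather than any RISC OS API: a heavy computation runs on a worker thread while an interactive loop keeps responding. On a multi-core machine the two genuinely run in parallel; on a single core they merely time-share.

    /* Hypothetical illustration (portable C11 threads, not a RISC OS
       API): a heavy computation runs on a worker thread while the
       interactive loop keeps responding. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>
    #include <time.h>

    static atomic_int done = 0;

    static int heavy_work(void *arg)
    {
        (void)arg;
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < 400000000UL; i++)
            sum += i;                          /* stands in for real work */
        atomic_store(&done, 1);
        return 0;
    }

    int main(void)
    {
        thrd_t worker;
        thrd_create(&worker, heavy_work, NULL);

        /* On a multi-core system the worker occupies another core and
           this loop stays smooth; on a single core they time-share. */
        while (!atomic_load(&done)) {
            puts("still responsive...");
            thrd_sleep(&(struct timespec){ .tv_sec = 1 }, NULL);
        }
        thrd_join(worker, NULL);
        return 0;
    }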


Concurrency and parallelism in simple terms

Without multi-core support, the OS is dealing with many tasks at once by sharing time. This is concurrency.

With multi-core support, the OS can actually do multiple things at the same time. This is parallelism.

From a user perspective, this difference shows up as the system remaining responsive while work is happening, instead of feeling like everything slows down together.

[Figure: multitasking on single-CPU systems vs multi-CPU systems]


Better compilers: invisible but powerful improvements

Programming language compilers affect users even if they never write code.

Modern CPUs, including ARM CPUs, provide specialised instructions that can process multiple values in a single operation, perform certain operations faster, and even speed up decimal-number processing. These are especially useful for graphics, audio, video, compression, and encryption.

When compilers understand how to use these instruction set extensions automatically, compiled applications become faster and more efficient without being rewritten.
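
As a small illustration, a plain loop like the one below can be auto-vectorised by modern compilers such as GCC or Clang with no source changes; the compiler flags in the comment are only an example.

    /* A plain loop like this can be auto-vectorised by a modern
       compiler (e.g. gcc -O3 -mcpu=cortex-a72), so several elements
       are processed per instruction with no source changes. */
    void scale(float *restrict dst, const float *restrict src,
               float k, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }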

For users, better compiler support results in:

  • Faster image processing.
  • Smoother audio and video playback.
  • Shorter wait times for file operations or complex computations.
  • Lower power usage for the same tasks.

This is one of the most cost-effective ways to improve performance across the entire software ecosystem.

One of the historical issues with RISC OS is its dependency on the Acorn/Castle/ROOL DDE, and in particular on the Norcroft C compiler, which does not seem very capable of exploiting modern ARM features (and ObjASM has its limitations too). By contrast, compilers like GCC and Clang do a much better job at this.


SIMD: doing more work per CPU cycle

SIMD, which stands for Single Instruction, Multiple Data, allows the CPU to apply the same operation to many pieces of data at once.

This is particularly valuable for workloads that RISC OS users encounter regularly, such as graphics rendering, sound processing, and data manipulation.

From an everyday perspective, SIMD means tasks that once felt heavy now complete faster and interfere less with the rest of the system.

[Figure: SIMD CPUs can use a single instruction to process multiple data]

This is the easiest improvement to achieve on traditional RISC OS: it requires either rewriting some portions of the assembly in areas that may benefit from SIMD instructions (which has already been done in a couple of places, such as some memcpy implementations) or improving the Norcroft compiler so that it can emit them for C code that would benefit.
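
As a sketch of what such a rewritten routine can look like, here is a minimal example using NEON intrinsics, assuming a NEON-capable ARM core and a compiler that provides <arm_neon.h> (GCC and Clang do; stock Norcroft does not).

    /* Minimal SIMD sketch using NEON intrinsics; four additions are
       performed per instruction instead of one. */
    #include <arm_neon.h>

    void add_arrays(float *dst, const float *a, const float *b, int n)
    {
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            float32x4_t va = vld1q_f32(a + i);      /* load 4 floats */
            float32x4_t vb = vld1q_f32(b + i);
            vst1q_f32(dst + i, vaddq_f32(va, vb));  /* 4 adds at once */
        }
        for (; i < n; i++)                          /* scalar tail */
            dst[i] = a[i] + b[i];
    }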



GPU compute: more than just drawing pixels

Modern GPUs (Graphics Processing Units) are not only for display output. They are extremely good at performing the same operation on large amounts of data.

If the operating system provides a standard way to access GPU compute features, applications can offload heavy work to the GPU instead of relying entirely on the CPU.

For users, this can mean:

  • Smoother graphics and animations.
  • Better video playback.
  • Less slowdown during visually intensive tasks.
  • Better use of modern hardware even with modest CPUs.

In practice, in many cases (think internet browsers, games, etc.) GPU acceleration can provide larger real-world benefits than adding more CPU cores, especially on systems with limited memory bandwidth.

[Figure: differences in parallelism and architecture of GPUs vs CPUs]

This will require integrating something like Vulkan and/or OpenGL with, at the very least, the desktop, and that may be a rather big task on classic RISC OS.


64-bit support: about survival, not speed

The move to 64-bit is often misunderstood as a performance upgrade. For RISC OS, it is primarily about long-term viability, although 64-bit can also help performance in specific types of computation.

Modern hardware increasingly assumes 64-bit software. Running a 32-bit operating system directly on bare metal becomes more difficult over time.

A 64-bit RISC OS ensures that the platform can continue to run on real hardware rather than being confined to emulation or legacy devices.

For users, this is not about new features. It is about ensuring that RISC OS still has a future on modern machines. That said, platforms like the Raspberry Pi 4 will remain in production until January 2034 (source: the official Raspberry Pi 4 Model B page).


Test automation: stability users can feel

Test automation is the practice of testing a system and its changes iteratively. It is very common in modern software engineering because it catches bugs quickly, and it directly affects reliability.

When an operating system has automated tests at multiple levels, developers can make changes with confidence. Regressions are caught early, and improvements do not silently break existing functionality.

For users, this translates into:

  • Fewer random bugs.
  • More stable updates.
  • A system that improves without becoming fragile.

This is a foundation for sustainable development, not a luxury.

RISC OS can adopt this even though most of its code is written in ARM assembly: as long as its subroutines (and/or SWIs) can be tested in isolation, it can adopt what is known as unit testing.
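
As a minimal sketch of the idea, the harness below tests a hypothetical clamp_byte routine in isolation; the same pattern applies equally to an ARM assembly subroutine linked in with a C-callable interface.

    /* Minimal unit-test sketch for an isolated routine. clamp_byte is
       a hypothetical example, not an existing RISC OS call. */
    #include <assert.h>
    #include <stdio.h>

    static int clamp_byte(int v)      /* routine under test */
    {
        if (v < 0)   return 0;
        if (v > 255) return 255;
        return v;
    }

    int main(void)
    {
        assert(clamp_byte(-5)  == 0);    /* below range */
        assert(clamp_byte(300) == 255);  /* above range */
        assert(clamp_byte(42)  == 42);   /* in range */
        puts("all clamp_byte tests passed");
        return 0;
    }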


Formal verification: stronger foundations

Formal verification is a different process from testing: it goes beyond testing by mathematically proving that certain parts of the system behave correctly.

While not visible to users, it reduces entire classes of bugs in critical components such as the kernel.

The practical benefits are:

  • Fewer crashes.
  • More predictable behaviour.
  • A safer base for future features.

This approach is increasingly common in professional systems where reliability matters. While not strictly required in all environments, it is generally recommended in modern system design.
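
For C components, one established route is deductive verification with tools such as Frama-C, where the intended behaviour is written as ACSL annotations and then proved rather than merely tested. A minimal sketch follows; the clamp function is just an illustrative example.

    /* Deductive-verification sketch: the contract in the ACSL comment
       is proved (e.g. by Frama-C's WP plugin), not just tested. */

    /*@ requires lo <= hi;
      @ assigns \nothing;
      @ ensures lo <= \result && \result <= hi;
      @*/
    int clamp(int v, int lo, int hi)
    {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }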

Formal verification will be quite tricky on traditional RISC OS.


Full memory protection: stability you can rely on

One of the most important limitations of the current RISC OS architecture is the lack of full memory protection between applications and the operating system. While RISC OS does provide some degree of task separation, this protection does not extend cleanly across the boundary between user applications and the operating system itself, which is where the most serious stability issues arise.

This means that if one application misbehaves, due to a bug, corrupted data, or unexpected input, it can overwrite memory used by the OS itself. When this happens, the result is often a crash, a frozen system, or subtle corruption that appears later and is very hard to diagnose. Some improvements have been made over the years by RISC OS Open; however, today’s RISC OS still does not provide full memory protection and, moreover, it offers extremely dangerous mechanisms such as the OS_EnterOS SWI, which can give full powers to any application.

This behaviour is not unusual for operating systems designed in an era when memory was scarce and software was small. However, on modern hardware, it becomes a major source of instability.

With full memory protection, each application runs in its own isolated memory space. The operating system enforces strict rules about what memory a task is allowed to access.

For everyday users, this has very concrete benefits:

  • A crashing application does not bring down the entire system.
  • Bugs are contained instead of spreading.
  • Long-running systems remain stable for much longer.
  • Debugging problems becomes easier and more reliable.

In practical terms, memory protection turns many system-wide crashes into simple application failures that can be closed and restarted.


Memory protection and performance are not opposites

A common concern is that memory protection makes systems slower. On modern hardware, this is largely untrue.

Modern CPUs are designed to support memory protection efficiently. When done correctly, the overhead is small compared to the gains in stability and predictability. In many cases, performance actually improves overall because the system no longer needs defensive workarounds to deal with corruption and undefined behaviour.

For users, this means a system that feels more solid without sacrificing responsiveness.


Why memory protection matters even more with multi-core systems

As soon as multi-core execution is introduced, the absence of memory protection becomes more dangerous.

When multiple tasks run truly in parallel, memory corruption bugs are harder to reproduce and easier to trigger. A single faulty task can interfere with others at exactly the wrong moment.

Full memory protection is therefore not just a quality improvement. It is a prerequisite for safely scaling RISC OS to multi-core hardware.


What this means for users

From a user’s perspective, full memory protection does not change how RISC OS feels to use. It changes how it fails.

Instead of sudden system-wide crashes, users see:

  • Individual applications failing safely.
  • A system that keeps running.
  • Less fear of experimenting with new software.
  • More confidence in leaving the system running for long periods.

These are quiet improvements, but they are some of the most important ones for the long-term health of the platform.


Memory safety: preventing entire classes of failures

Closely related to memory protection, but conceptually distinct, is memory safety.

Memory safety refers to a set of guarantees that prevent software from accessing memory in unintended or unsafe ways. This includes issues such as:

  • Reading from uninitialised memory.
  • Writing beyond allocated buffers.
  • Using memory after it has been freed.
  • Corrupting internal data structures due to invalid pointers.

These problems are not theoretical. They are a major source of crashes, unpredictable behaviour, and security vulnerabilities across all operating systems and applications.
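
To make these failure classes concrete, the illustrative C fragment below contains two of them; both are undefined behaviour, and real systems exhibit exactly these patterns.

    /* Illustrative fragment with two classic memory-safety bugs; shown
       only to make the failure classes above concrete. */
    #include <stdlib.h>
    #include <string.h>

    void example(void)
    {
        char *buf = malloc(8);
        if (!buf) return;

        strcpy(buf, "123456789");  /* BUG: writes 10 bytes into 8 */

        free(buf);
        buf[0] = 'x';              /* BUG: use after free */
    }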


Why memory safety matters to everyday users

From a user’s perspective, memory safety is not about performance or features. It is about trust and predictability.

When memory safety is weak or absent, a small programming error can:

  • Crash an application unexpectedly.
  • Corrupt unrelated data.
  • Destabilise the entire system.
  • Cause problems that appear long after the original fault.

These failures are often difficult to reproduce and diagnose, which is why users sometimes experience “random” crashes or system instability with no obvious cause.

When memory safety is enforced consistently, many of these failures simply cannot occur.


Memory safety versus memory protection

Memory protection and memory safety are often confused, but they address different problems.

  • Memory protection isolates applications from each other and from the operating system.
  • Memory safety ensures that code accesses its own memory correctly.

Memory protection can stop one application from corrupting another, but it cannot prevent an application from corrupting itself. Memory safety addresses that internal correctness.

For users, the combination of both is what matters:

  • Memory protection limits the blast radius of failures.
  • Memory safety reduces how often failures happen in the first place.

Why memory safety becomes critical on modern systems

As systems grow more complex, memory safety issues become more likely and more damaging.

Modern operating systems:

  • Run many tasks concurrently.
  • Execute code in parallel on multiple cores.
  • Interact with complex hardware and drivers.
  • Run for long periods without rebooting.

In such environments, subtle memory bugs are easier to trigger and harder to isolate. A single error can lead to cascading failures that affect responsiveness, stability, or data integrity.

This is why modern operating system design increasingly treats memory safety as a first-class concern rather than an afterthought.


Memory safety is not tied to a single language

Memory safety is a property of a system, not a programming language.

While some languages make it easier to achieve, memory safety can also be enforced in systems written in C or other low-level languages through:

  • Strict coding rules.
  • Careful API design.
  • Runtime checks.
  • Static analysis.
  • Extensive testing and verification.

What matters is not how memory safety is achieved, but whether it is achieved consistently across the system.
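
As a small sketch of two of these techniques, careful API design plus runtime checks, here is a hypothetical bounds-checked buffer type in plain C.

    /* Hypothetical bounds-checked buffer: every write goes through one
       validated entry point, so out-of-bounds writes are rejected
       instead of corrupting memory. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        unsigned char data[256];
        size_t        len;
    } safe_buf;

    bool safe_buf_append(safe_buf *b, const void *src, size_t n)
    {
        if (b == NULL || src == NULL)
            return false;
        if (n > sizeof b->data - b->len)   /* would overflow: refuse */
            return false;
        memcpy(b->data + b->len, src, n);
        b->len += n;
        return true;
    }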

For users, the outcome is what counts.


Virtual memory: context, benefits, and why it remains optional

Virtual memory is a mechanism that allows software to operate on a logical address space that is independent of the physical RAM installed in the system. The operating system maps this virtual address space onto physical memory and, when required, secondary storage.
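
As a minimal illustration of the mechanism, the sketch below models translation through a single-level page table with 4 KiB pages; real MMUs walk multi-level tables in hardware, and the page-fault path is where an OS could bring data in from secondary storage.

    /* Minimal model of virtual-to-physical address translation.
       Illustrative only; not how any particular MMU is programmed. */
    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NUM_PAGES  1024

    typedef struct {
        uint32_t frame;     /* physical frame number */
        bool     present;   /* false => page fault */
    } pte_t;

    static pte_t page_table[NUM_PAGES];

    bool translate(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;
        uint32_t offset = vaddr & (PAGE_SIZE - 1);

        if (vpn >= NUM_PAGES || !page_table[vpn].present)
            return false;                      /* page fault */

        *paddr = (page_table[vpn].frame << PAGE_SHIFT) | offset;
        return true;
    }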

Many modern operating systems rely heavily on virtual memory. RISC OS, however, was designed in a very different hardware context (although it does offer a virtual address space).


Historical context and design choices

Early RISC OS systems were typically equipped with:

  • Floppy disks rather than hard drives.
  • Very limited RAM, often well below 4 MB.
  • No efficient hardware support for paging.

In such environments, virtual memory would have offered little benefit and significant drawbacks. Paging memory to floppy disks would have been extremely slow, and the additional kernel complexity would have consumed scarce system resources.

Instead, RISC OS adopted a model based on predictability and efficiency. Core components and commonly used applications were placed in ROM, reducing RAM usage and ensuring fast startup times. Applications were written to operate within strict memory limits and to avoid assumptions about large or expandable address spaces.

These choices were pragmatic responses to real hardware constraints, not oversights.

With the introduction of RiscPC-class hardware, virtual memory could have made more sense, but given the low performance of the RiscPC IDE interface and discs, it would probably have resulted in the same poor performance that early Microsoft Windows suffered from for a very long time. The third-party solution that introduced minimal virtual memory support had limited system scope, and so was not as effective as on other systems. Modern machines could benefit from virtual memory, given the fast performance of modern storage devices and systems.


Why RISC OS applications use relatively little RAM

RISC OS applications are often perceived as being unusually memory-efficient. This is partly due to careful engineering, but also a consequence of architectural constraints:

  • Applications are designed around fixed memory budgets.
  • There is no pervasive shared library (DLL-style) model.
  • Large in-memory caches are uncommon.
  • Software tends to process data incrementally rather than retaining large working sets.

This approach aligns well with the traditional RISC OS design philosophy and contributes to the system’s lightweight and responsive feel.


Benefits and trade-offs of virtual memory

From a user perspective, virtual memory is not primarily about performance or “having more RAM”. Its main benefits are:

  • More graceful behaviour under memory pressure.
  • Reduced likelihood of sudden allocation failures.
  • Support for larger or more complex workloads.

At the same time, virtual memory introduces significant trade-offs:

  • Increased kernel complexity.
  • Less predictable performance due to paging.
  • Harder debugging and observability.
  • Tighter coupling with memory protection and scheduling mechanisms.

For systems that prioritise simplicity and predictability, these costs are non-trivial.


Why there is no strong need today

At present, RISC OS does not have a compelling, system-wide use case that requires full virtual memory support. Moreover, without other improvements it is unlikely that users will find themselves running a large number of tasks at once.

Most applications:

  • Fit comfortably within physical RAM, and modern systems ship with increasing amounts of it.
  • Do not rely on memory-mapped files or large sparse address spaces.
  • Are not structured around aggressive caching strategies. As far as I know there are only two caching tools (one is CCache and the other is a file-system cache being developed by RISC OS Developments).

While limited paging support has been added over time for specific use cases, such as Dynamic Areas via third-party solutions, full virtual memory has never been a foundational requirement of the platform.

Virtual memory becomes significantly more important in environments that make extensive use of shared libraries and complex application models. On RISC OS, such patterns exist today mainly as optional (albeit very welcome) additions, for example through UnixLib, rather than as core architectural assumptions.


Virtual memory as a potential option, not a requirement

Virtual memory should therefore be understood as a potential enabling feature rather than a necessary component of RISC OS.

If the platform were to evolve towards:

  • Much more complex applications.
  • Widespread use of shared libraries.
  • Long-running background services.
  • Heavier memory usage patterns.

then virtual memory would naturally become more relevant.

Until such needs clearly emerge, I believe its absence remains a reasonable and deliberate trade-off that preserves the simplicity and predictability many users value in RISC OS.


Preemptive multitasking: why scheduling affects everyday use

Another key factor shaping the everyday experience on RISC OS is how multitasking itself is implemented.

At its core, RISC OS was originally designed as a single-task operating system. Multitasking was later added mainly at the desktop level through the Window Manager, commonly known as the WIMP. As a result, multitasking on RISC OS is not enforced uniformly by the kernel, but instead relies heavily on conventions and cooperation between applications.

This design has two important consequences:

  1. The kernel does not control task scheduling in a preemptive way.
  2. Desktop multitasking relies on applications yielding control voluntarily (by returning control through calls to Wimp_Poll).

These characteristics are fundamental to understanding many of the behaviours users experience today.
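
To make the cooperative model concrete, here is a minimal sketch of a classic Wimp poll loop, assuming the DDE-style _kernel_swi interface and omitting error handling; note that the task only yields to others when it calls Wimp_Poll.

    /* Minimal cooperative Wimp task sketch (error handling omitted). */
    #include "kernel.h"

    #define Wimp_Initialise 0x400C0   /* SWI numbers, normally in swis.h */
    #define Wimp_Poll       0x400C7
    #define Wimp_CloseDown  0x400DD

    int main(void)
    {
        _kernel_swi_regs r;
        int block[64];                /* word-aligned Wimp event block */
        int quit = 0;

        r.r[0] = 310;                 /* known Wimp version * 100 */
        r.r[1] = 0x4B534154;          /* "TASK" magic word */
        r.r[2] = (int)"PollDemo";
        r.r[3] = 0;                   /* accept all messages */
        _kernel_swi(Wimp_Initialise, &r, &r);
        int task = r.r[1];            /* task handle */

        while (!quit) {
            r.r[0] = 0;               /* event mask: deliver everything */
            r.r[1] = (int)block;
            _kernel_swi(Wimp_Poll, &r, &r);   /* other tasks run here */
            switch (r.r[0]) {
            case 17: case 18:                 /* (recorded) user message */
                if (block[4] == 0)            /* Message_Quit */
                    quit = 1;
                break;
            }
        }

        r.r[0] = task;
        r.r[1] = 0x4B534154;
        _kernel_swi(Wimp_CloseDown, &r, &r);
        return 0;
    }

If the code between two Wimp_Poll calls runs for too long, nothing else on the desktop gets a chance to run; that is the fragility described above.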


Cooperative multitasking and its user-visible effects

In a cooperative multitasking system, applications are expected to return control to the operating system regularly. When applications are well-behaved and workloads are light, this can work surprisingly well, and RISC OS often feels fast and responsive.

Problems arise when an application:

  • Performs a long computation.
  • Waits on slow I/O.
  • Encounters an unexpected condition.
  • Or simply contains a bug.

If an application does not yield control in a timely manner, the desktop can become unresponsive. From a user perspective, this often feels as if the entire system has frozen, even though the issue originates from a single task.

This is why users frequently need to resort to key combinations such as ALT + BREAK to interrupt misbehaving applications. While this mechanism is useful, it also highlights the limits of cooperative multitasking: the system depends on applications behaving correctly in order to remain responsive.


Why this model no longer matches modern workloads

Cooperative multitasking made sense when:

  • Systems were single-core.
  • Applications were relatively simple, and the assumption was that an application would be in control of the system, not the other way around.
  • Workloads were short-lived.
  • Memory resources were tightly constrained.

Modern usage patterns are very different. Applications are more complex, background tasks are common, and workloads can run for long periods. Under these conditions, cooperative multitasking becomes increasingly fragile.

The issue is not performance in isolation, but control. Without kernel-level enforcement, the operating system cannot reliably prevent one task from delaying or blocking others.

This limitation becomes more serious as systems move towards multi-core execution, where multiple tasks may run simultaneously. Without strict scheduling control, timing issues and unpredictable stalls become harder to manage and harder to diagnose.


Preemptive multitasking: what changes for users

In a preemptive multitasking system, the kernel controls scheduling directly. Each task is given a defined time slice, and the kernel can suspend a task when needed, regardless of whether the application cooperates.

For everyday users, this leads to clear and practical benefits:

  • A single application cannot freeze the desktop.
  • Background tasks cannot monopolise the CPU.
  • Interactive tasks remain responsive under load.
  • System behaviour becomes more predictable.

Importantly, this shifts responsibility away from individual applications and onto the operating system, where it can be enforced consistently.
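
As a minimal sketch of the policy involved (context switching itself is architecture-specific and omitted), the following models a round-robin scheduler driven by a timer tick; the names and slice length are illustrative only.

    /* Minimal model of preemptive round-robin scheduling: a timer tick
       decrements the running task's slice and, once it expires, the
       kernel picks the next runnable task whether or not the current
       one cooperates. */
    #include <stdio.h>

    #define MAX_TASKS        4
    #define TIME_SLICE_TICKS 3

    static int runnable[MAX_TASKS] = { 1, 1, 0, 1 };  /* task 2 blocked */
    static int current    = 0;
    static int ticks_left = TIME_SLICE_TICKS;

    int scheduler_tick(void)          /* called on every timer interrupt */
    {
        if (--ticks_left > 0)
            return current;           /* slice not yet exhausted */

        ticks_left = TIME_SLICE_TICKS;
        int next = current;
        do {                          /* round-robin search */
            next = (next + 1) % MAX_TASKS;
        } while (!runnable[next] && next != current);
        current = next;               /* preemption happens here */
        return current;
    }

    int main(void)
    {
        for (int t = 0; t < 12; t++)  /* simulate 12 timer ticks */
            printf("tick %2d -> task %d\n", t, scheduler_tick());
        return 0;
    }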


How preemptive multitasking relates to other modern features

Preemptive multitasking does not exist in isolation. It is closely linked to other architectural features discussed in this article.

  • Memory protection ensures that tasks cannot interfere with each other’s data.
  • Preemptive scheduling ensures that tasks cannot monopolise execution time.
  • Multi-core execution allows tasks to run truly in parallel.

When these features are combined, the system can remain responsive, stable, and predictable even when individual applications misbehave. When they are missing or only partially implemented, failures tend to propagate to the entire system.


Why this matters for the future of RISC OS

Preemptive multitasking is not about copying other operating systems or abandoning RISC OS traditions. It is about ensuring that the operating system can safely support modern workloads and hardware.

As RISC OS continues to evolve and adopts features such as multi-core execution, improved memory protection, and GPU acceleration, relying on cooperative multitasking becomes increasingly limiting.

Understanding this helps explain why certain user-visible problems persist today, and why addressing them requires structural changes rather than incremental fixes.


Moving forward without losing identity

RISC OS does not need to abandon its identity to move forward. Modernising the platform does not mean discarding what makes it special.

The aim is simple: keep RISC OS usable, responsive, and relevant on modern hardware, while preserving the qualities that people enjoy: being relatively simple (revised, obviously, within modern requirements and needs), immediate, and oriented towards a graphical environment.

This article exists to explain why these features matter, not just to developers, but to anyone who wants RISC OS to have a meaningful future as a hobby OS or more.


A concrete example: how Merlin addresses these challenges

Several projects are exploring different paths to modernise the RISC OS ecosystem, each with different scopes and priorities. Merlin is one such project, and it is useful to mention it here as a concrete example because it attempts to address all of the architectural challenges discussed in this article together, rather than in isolation.

The purpose of this section is not to promote a specific solution, but to show that these issues are tractable when treated as a coherent system design problem.


Multitasking, multi-core, and responsiveness

Merlin is designed around kernel-level preemptive multitasking from the start. Scheduling is enforced by the kernel rather than delegated to desktop conventions, ensuring that individual tasks cannot monopolise CPU time.

This model scales naturally to multi-core systems, where tasks can run truly in parallel while remaining under strict scheduling control. For users, this translates into predictable responsiveness even when workloads are heavy or applications misbehave.


Memory protection and isolation

Merlin enforces full memory protection between tasks and subsystems. Applications and modules operate within clearly defined memory boundaries, preventing faults in one component from propagating across the system.

This isolation is particularly important when running complex or long-lived workloads and becomes essential in multi-core environments where parallel execution amplifies the consequences of memory corruption.


Memory safety as a system property

Memory safety in Merlin is treated as a foundational design goal, not as an afterthought. The system is structured to prevent common classes of memory errors that historically lead to crashes, corruption, and unpredictable behaviour.

While different techniques can be used to achieve memory safety, the key point from a user perspective is the outcome: fewer unexplained failures, improved long-term stability, and greater confidence in leaving systems running continuously.


Modern toolchains and CPU capabilities

Merlin is designed to make full use of modern compiler toolchains and CPU features, including vector instructions and per-core optimisations. This allows software to benefit from hardware capabilities such as SIMD without requiring application developers to manage low-level details themselves.

The result is better performance across graphics, audio, data processing, and system services, with lower overhead and improved efficiency.


GPU acceleration and graphics flexibility

Merlin introduces a Virtual Display Unit (VDU) architecture that separates front-end display APIs from back-end rendering implementations. This allows the same system to support simple framebuffers, text-only displays, or GPU-accelerated backends depending on the target hardware.

For users, this means smoother graphics, better scalability across devices, and the ability to evolve the graphics stack without rewriting applications.
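
As a purely illustrative sketch of the general idea, and emphatically not Merlin's actual VDU API, a front-end/back-end split can look like this:

    /* Hypothetical front-end/back-end display split; all names are
       invented for illustration. */
    #include <stdio.h>

    typedef struct {
        const char *name;
        void (*fill_rect)(int x, int y, int w, int h, unsigned colour);
        void (*present)(void);            /* flush / page-flip */
    } vdu_backend;

    /* A stub back-end; real ones could target a plain framebuffer, a
       text-only console, or a GPU-accelerated path. */
    static void fb_fill(int x, int y, int w, int h, unsigned c)
    {
        printf("fb: fill %dx%d at (%d,%d) colour %06x\n", w, h, x, y, c);
    }
    static void fb_present(void) { puts("fb: present"); }

    static const vdu_backend framebuffer = { "fb", fb_fill, fb_present };
    static const vdu_backend *active = &framebuffer;

    /* Front-end calls route to whichever back-end is active, so
       applications need no changes when the rendering path changes. */
    void vdu_fill_rect(int x, int y, int w, int h, unsigned colour)
    {
        active->fill_rect(x, y, w, h, colour);
    }

    int main(void)
    {
        vdu_fill_rect(10, 10, 64, 32, 0xFF8800);
        active->present();
        return 0;
    }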


Test automation and verification

Merlin integrates automated testing and formal verification into its development process. These practices reduce regression risk, improve confidence in changes, and eliminate entire classes of errors before they reach users.

From a user standpoint, this shows up as a system that evolves without becoming fragile, where improvements do not routinely introduce new failures.


A coherent approach rather than isolated fixes

What distinguishes Merlin as an example is not any single feature, but the fact that these features are designed to reinforce each other:

  • Preemptive multitasking supports responsiveness.
  • Memory protection limits fault propagation.
  • Memory safety reduces fault frequency.
  • Multi-core execution improves scalability.
  • Testing and verification improve long-term reliability.

Addressing these concerns together avoids the situation where individual improvements expose new weaknesses elsewhere in the system.


Why Merlin is mentioned here

Merlin is not presented as the only possible future for RISC OS, nor as a replacement for existing systems. It is mentioned here to demonstrate that modernising a RISC-OS-inspired platform requires treating scheduling, memory, safety, tooling, and scalability as parts of a single design problem.

Whether Merlin itself becomes widely used is secondary to the broader point: the challenges described in this article are real, interconnected, and solvable.

Required thanks for this ginormous article:

1) Rick Murray, for being the first to help review the entire content!

2) Druck, for his feedback.

3) Nemo, for his feedback.