What’s new in Archetype: GBMs, smarter tuning, and a future-ready foundation

Peter Szocs, Consultant
23 September 2025

The latest Archetype release gives users more flexibility and control when building models. Most notably, the platform now supports Gradient Boosting Machines (GBMs), a technique popular among data scientists for its strong predictive performance and relative ease of interpretation.

Alongside GBMs, the release introduces faster hyperparameter tuning and backend improvements that lay the groundwork for additional model types. Together, these changes reflect our continued focus on a platform that supports practical, explainable machine learning without compromising speed or rigour.

Why add GBMs now?

Gradient Boosting Machines (GBMs), including variants like XGBoost, have become a go-to technique for many data scientists in credit and fraud modelling. They offer strong predictive performance across a wide range of problems and are often easier to deploy than deep learning methods.

As IBM puts it, “These techniques are extensively used in machine learning to improve model accuracy, especially when dealing with complex or noisy datasets. By combining multiple perspectives, ensemble learning provides a way to overcome the limitations of individual models and achieve improved optimisation.”

[Image: GBM model example. Source: IBM, “What is gradient boosting?”]

There are two main reasons behind their popularity:

  • First, they’re comparatively straightforward to implement, making them more accessible for teams with limited time or resources. 
  • Second, they’re easier to explain. While some modelling techniques are often labelled black boxes, GBMs allow you to quantify the impact of each variable mathematically, which is a key benefit for teams working in regulated environments or needing to maintain stakeholder confidence.

For many firms, it’s also common practice (and in some cases policy) to trial multiple modelling techniques and select the best-performing approach for each use case. Adding GBMs to Archetype makes that process faster and more flexible, without adding complexity to the user experience.

What else is new in this release?

[Image: Archetype’s new look and feel]

Alongside GBM support, the latest Archetype update includes several enhancements focused on speed, scalability, and future flexibility:

Faster model tuning with Bayesian search
Archetype now includes a new hyperparameter tuning capability based on Bayesian optimisation. Unlike a full grid search, which exhaustively tests every combination, Bayesian search uses the results of earlier trials to decide which settings to try next, so it converges on strong configurations in far fewer evaluations. This is especially useful for GBMs, whose performance can be highly sensitive to parameter choices, and it cuts the time needed to reach a strong model while freeing users from manual trial-and-error.
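
To give a rough sense of what this kind of search is doing, the sketch below tunes an open-source GBM with Optuna’s default TPE sampler, a common Bayesian-style optimiser. It is not Archetype’s own tuner, and the dataset and parameter ranges are illustrative assumptions only:

```python
# Illustrative only (not Archetype's implementation): Bayesian-style tuning of
# a GBM. Optuna's TPE sampler proposes each new trial based on earlier results
# instead of exhaustively walking every combination in a grid.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data for the example.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 400),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        "max_depth": trial.suggest_int("max_depth", 2, 6),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    model = GradientBoostingClassifier(random_state=0, **params)
    return cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)  # ~50 adaptive trials vs. hundreds for a full grid
print(study.best_params, study.best_value)
```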

Built-in readiness for future modelling techniques
Under the surface, Archetype’s architecture has been upgraded to support additional model types in future. This opens the door to techniques like LightGBM, Random Forests, and even linear models, giving teams more tools to benchmark and test without relying on external platforms.

A refreshed look and feel
This release also includes a visual update to the Archetype interface, bringing a more modern, streamlined look to the user experience.

What this means for model development

These updates give users greater freedom to test, compare, and deploy models, without adding friction to their workflow.

With GBMs now available alongside deep neural networks (DNNs), users can trial multiple techniques on the same dataset in parallel, switching between model types with just a few clicks. This supports better model selection and aligns with common internal modelling policies that require multiple approaches to be evaluated.
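
Inside Archetype that comparison is a point-and-click workflow. Purely as a rough, stand-alone illustration of the underlying idea, the sketch below scores a GBM and a small neural network on the same synthetic dataset with identical cross-validation folds; none of it reflects Archetype’s internals:

```python
# Rough illustration (not Archetype): benchmark two model families on the
# same dataset using identical cross-validation folds.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=3_000, n_features=20, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

candidates = {
    "GBM": GradientBoostingClassifier(random_state=0),
    # A small feed-forward network standing in for a DNN; inputs are scaled
    # because neural networks are sensitive to feature ranges.
    "DNN": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    ),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```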

The new tuning functionality further accelerates this process. Instead of spending time manually adjusting parameters, users can rely on Archetype to optimise the model configuration automatically, often testing dozens or even hundreds of variations behind the scenes before returning the final result.

In practice, these changes make it easier to find the best-performing model for each use case, whether in fraud prevention, credit decisioning, or portfolio analysis, while saving valuable analyst time.

Explainability and good governance

Archetype has always prioritised model transparency, and that principle remains central as new techniques are added.

GBMs offer a natural advantage here. As tree-based models, they allow for direct measurement of variable impact and are generally easier to explain to non-technical stakeholders than more complex architectures. This makes them a useful option for teams who want strong performance without compromising interpretability.
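
Archetype surfaces this without any code, but as a small open-source sketch of what “direct measurement of variable impact” looks like for a GBM, scikit-learn exposes per-feature importances on a fitted model (the data below is a synthetic stand-in):

```python
# Sketch only (not Archetype): quantify each variable's contribution in a fitted GBM.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a credit or fraud dataset.
X, y = make_classification(n_samples=5_000, n_features=10, n_informative=4, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3, random_state=0)
model.fit(X, y)

# Impurity-based importances, normalised to sum to 1: a simple, quantified
# view of which variables drive the model's splits.
for i, importance in enumerate(model.feature_importances_):
    print(f"x{i}: {importance:.3f}")
```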

The platform also supports stability constraints, helping to ensure that GBM-based models behave predictably over time, which can be a key concern in regulated environments. While DNNs can achieve similar levels of performance, some users may find GBMs a more accessible choice, particularly if they’re already familiar with tree-based modelling approaches.
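
The exact form of Archetype’s stability constraints is configured within the platform. As one loosely related, open-source illustration of constraining a tree-based model so that its behaviour stays predictable, scikit-learn’s HistGradientBoostingClassifier can be told to respond only monotonically to chosen features:

```python
# Illustration only: monotonic constraints on an open-source gradient-boosted
# model, one common way of keeping tree-based behaviour predictable. This is
# not necessarily how Archetype's stability constraints are implemented.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier

X, y = make_classification(n_samples=4_000, n_features=5, n_informative=3, random_state=0)

# One entry per feature: 1 = the predicted probability may only increase as the
# feature increases, -1 = may only decrease, 0 = unconstrained.
model = HistGradientBoostingClassifier(monotonic_cst=[1, -1, 0, 0, 0], random_state=0)
model.fit(X, y)
print(model.score(X, y))
```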

As always, Archetype offers flexibility for both new and advanced users: pre-configured defaults for those looking to move quickly, and fine-tuned control for teams who want to customise every step of the process, all without needing to write code.

What’s next for Archetype?

This release is part of a broader roadmap to expand Archetype’s modelling capabilities while maintaining a focus on explainability, automation, and practical application.

With GBMs now live, the groundwork has been laid to support additional modelling techniques — including LightGBM, Random Forests, and linear models — in future updates. These will give users more benchmarking options and further flexibility in how they approach different modelling tasks.

The goal isn’t to prescribe a single “best” method, but to enable faster experimentation, more robust comparisons, and better outcomes, whatever the use case. That means continuing to improve the platform’s architecture, expanding its model library, and refining tools that make regulated model development both rigorous and efficient.

Wrap up

Whether you're building fraud models, credit decisioning tools, or portfolio risk engines, Archetype’s latest release gives you more control and less manual effort. With Gradient Boosting Machines now supported, smarter tuning built in, and the foundations in place for future techniques, it’s a practical step forward for faster, more flexible model development.

Ready to see how Archetype could improve your model development?

➡️ Learn more about Archetype or get in touch with our team.