This month, the engineering team has focused its attention on a range of areas, including:
Web3
Codebase
Our codebase has grown quite large over time, and there remain a number of improvements to be implemented prior to mainnet launch.
For the Solidity codebase, our current focus is flexibility and extensibility, enabling the protocol to adapt quickly to new use cases. Once we are satisfied on these fronts, we will turn to ensuring security and gas efficiency.
Despite a re-invigorated recruitment and vetting initiative, Bumper continues to suffer from a shortage of talent to meet its engineering challenges, deadlines and budget. Third-party providers are being sought to meet this demand and keep the project on schedule.
Documentation and Enhanced Test Suite
We’ve refreshed our internal developer documentation to bring it up to date with the current codebase, better supporting new hires to the engineering team.
We’ve also added automated security-testing tools (such as Slither) to our test suites to better prepare for our next audit.
Deployment Scripts
Our smart contract deployment scripts have been re-worked to make them completely modular.
Previously, the scripts deployed all protocol contracts in one go, an approach that can make it challenging to determine if (and where) any errors have been introduced. The solution is to deploy contracts sequentially, allowing us to manually check each step and apply fixes as required (see the sketch below).
Additionally, these new scripts allow us to upgrade any individual contract after launch, which will be critical if, for example, a vulnerability is discovered post-launch.
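As a rough illustration of this sequential approach, here is a minimal sketch assuming a Hardhat/ethers toolchain; the step structure, contract names and checks are hypothetical, not Bumper’s actual scripts:

```typescript
// Minimal sketch of a modular, step-by-step deployment. Each contract is
// deployed and sanity-checked before the next step runs, so any failure is
// localised to a single step rather than an all-in-one deployment.
import { ethers } from "hardhat";

type DeployStep = {
  name: string;                                // contract to deploy
  args: unknown[];                             // constructor arguments
  verify: (address: string) => Promise<void>;  // per-step sanity check
};

async function deploySequentially(steps: DeployStep[]): Promise<Record<string, string>> {
  const deployed: Record<string, string> = {};
  for (const step of steps) {
    const factory = await ethers.getContractFactory(step.name);
    const contract = await factory.deploy(...step.args);
    await contract.waitForDeployment();        // ethers v6 API
    const address = await contract.getAddress();
    await step.verify(address);                // halt here if anything looks wrong
    deployed[step.name] = address;
    console.log(`${step.name} deployed at ${address}`);
  }
  return deployed;
}
```

Because each step is independent, the same structure lets a single contract be redeployed (or, with a proxy pattern, upgraded) in isolation after launch.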
Oracle Upgrades
We’ve made several upgrades to the protocol in terms of how external information is collected, processed, and passed through into the core.
These refactors were partly necessary to avoid being “built into a corner” by improving the separation of concerns between different modules that make up the protocol.
Several improvements in this area have been built and are currently in review.
A true “Oracle Module” has been developed to replace the old style of direct price injection into individual protection markets. This involved creating an interface for processing incoming information, allowing the protocol to be upgraded with additional sources of price data, and adding logic to ensure that any information passed through to a protection market is robust.
The Oracle Module now also supports the addition of other types of information oracles (not just price feeds), which can later be incorporated into how premiums, yields and fees are calculated.
We are also implementing automatic detection of price-feed failures.
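To illustrate the kind of robustness logic described above, here is a minimal sketch of price-feed sanity checks, written in TypeScript for readability (the actual checks live in the Solidity Oracle Module; the round shape, names and thresholds below are assumptions modelled on a Chainlink-style feed):

```typescript
// Hypothetical sketch of price-feed failure detection: reject rounds that
// are non-positive, stale, or show an implausibly large jump.
type Round = { roundId: number; answer: number; updatedAt: number };

const MAX_STALENESS_S = 3600; // reject prices older than an hour (assumed)
const MAX_DEVIATION = 0.2;    // reject >20% moves between rounds (assumed)

function isRoundUsable(round: Round, prev: Round | null, nowS: number): boolean {
  if (round.answer <= 0) return false;                         // broken feed
  if (nowS - round.updatedAt > MAX_STALENESS_S) return false;  // stale feed
  if (prev && Math.abs(round.answer - prev.answer) / prev.answer > MAX_DEVIATION) {
    return false;                                              // implausible move
  }
  return true;
}
```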
Gas Optimisation
We’ve implemented gas optimisation updates relating to how the premium is calculated, based on changing the price feed from the time-domain to the difference-domain.
This is a mathematical shortcut which exchanges sampling the asset price at fixed intervals of time for sampling the times at which the asset price changes by a fixed percentage.
This requires some “pre-processing” (made possible by refactoring the protocol to ensure the oracle code is properly modular), wherein the price feeds provided by Chainlink are buffered and transformed before being sent through to Core to perform functions such as calculating premiums and yields, and rebalancing.
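As a rough sketch of the difference-domain idea (the 1% step size and all names here are illustrative assumptions, not the protocol’s actual parameters):

```typescript
// Instead of keeping the price at every time step (time domain), keep only
// the ticks at which the price has moved by a fixed percentage since the
// last recorded tick (difference domain). Downstream maths then iterates
// over far fewer samples.
type Tick = { timestamp: number; price: number };

function toDifferenceDomain(ticks: Tick[], stepPct = 0.01): Tick[] {
  if (ticks.length === 0) return [];
  const out: Tick[] = [ticks[0]];
  let ref = ticks[0].price;                // last recorded price level
  for (const t of ticks.slice(1)) {
    if (Math.abs(t.price - ref) / ref >= stepPct) {
      out.push(t);                         // record *when* the move happened
      ref = t.price;
    }
  }
  return out;
}
```

In this form, a quiet market contributes almost no samples, which is presumably where much of the gas saving comes from.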
Sim
The last month has seen significant progress as we’ve managed to bring in some key hires (contractors) with the right skills and experience to rework several components of the software architecture for the Sim.
While we were originally hoping to put off these upgrades until next year, they became necessary sooner than expected due to the Sim’s increased program run time, memory usage, and disk usage.
Upon further investigation (and taking into account several additional features we needed to include ahead of launch) it became clear that progress on economic modelling would be significantly impacted if this was left unresolved.
In response, we hired some experts to help us out with several software architecture upgrades.
The changes that we needed covered four key areas: configuration management, performance and big data, agent architecture, and a new post-processing framework:
Configuration Management
The first area we tackled was improving the way that we configure the simulation in order to run economic testing.
The increasing number of parameters that can impact the economic results being produced and analysed demanded a more efficient way to set up, run, analyse and iterate, ensuring the process is well-controlled.
This involved swapping out the core simulation engine, upgrading the way that parameters are defined and “swept”, ensuring a strict separation of concerns between parameters and state, removing old code shortcuts that allowed us to run early experiments with the sim, and restructuring nomenclature.
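For a flavour of what strictly separated, sweepable parameters can look like, here is a hypothetical sketch; the field names are illustrative, not the Sim’s actual schema:

```typescript
// Parameters are immutable inputs to a run, kept strictly apart from
// mutable simulation state. A sweep spec expands into the cartesian
// product of concrete run configurations.
type SimParams = {
  protectionTermDays: number;
  premiumCurveSlope: number;
  takerCount: number;
};

type Sweep = { [K in keyof SimParams]?: SimParams[K][] };

function expandSweep(base: SimParams, sweep: Sweep): SimParams[] {
  let runs: SimParams[] = [base];
  for (const [key, values] of Object.entries(sweep) as [keyof SimParams, number[]][]) {
    runs = runs.flatMap(run => values.map(v => ({ ...run, [key]: v })));
  }
  return runs;
}

const runs = expandSweep(
  { protectionTermDays: 30, premiumCurveSlope: 1.2, takerCount: 500 },
  { premiumCurveSlope: [0.8, 1.2, 1.6], takerCount: [100, 500, 1000] },
); // 9 concrete configurations, each fully reproducible
```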
Performance Upgrades
Simulation run times were ballooning to over 8 hours on a modern M1 Mac, and memory usage was similarly troublesome, with many simulations failing due to insufficient RAM even on our powerful desktop machines.
Several weeks of performance profiling and optimisations were conducted to isolate and treat areas of code which were unreasonably impacting performance. Our simulation run times are now back down to under 20 seconds for Fast Mode and less than 10 minutes for full mode.
This has resulted in an individual machine being able to conduct significantly more analysis on any given day.
We’re now looking at disk usage as an area to improve, as a single Full-Mode run can produce in excess of 200GB of data!
Agent Architecture
We are mid-way through a rebuild of how we generate and configure the simulation’s agents, as well as adding a slew of features to ensure we have an appropriate level of sophistication in our economic analysis.
Similar to the upgrades to configuration management, we’re upgrading how agent behaviour is defined and set up for a given simulation run.
The new architecture allows us to clearly and flexibly define agent subsets based on their configured behaviours (such as how risk-aware they are, their balance between risk aversion and profit-seeking, responsiveness to protocol changes, etc).
These subsets can also be given colloquial names, like “Traders”, allowing us to clearly trace each group’s size and interactions over time as the simulation runs, superseding the previous cast of simple, one-dimensional “Takers” and “Makers”.
Agents can now hold both Taker and Maker positions, based on their configured behaviour. They have balances of both Assets and Capital, can have multiple positions, and can now renew positions in full protocol fidelity.
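Sketching what such behaviour-driven cohorts might look like (all trait names and values below are hypothetical illustrations, not the Sim’s real configuration):

```typescript
// A cohort shares a behaviour profile but each agent carries its own
// balances and positions, and may hold Taker and Maker positions at once.
type Behaviour = {
  riskAversion: number;      // 0 = pure profit-seeking, 1 = fully risk-averse
  responsiveness: number;    // how quickly the agent reacts to protocol changes
  renewsPositions: boolean;  // whether expiring positions are rolled over
};

type Agent = {
  cohort: string;            // colloquial name, e.g. "Traders"
  behaviour: Behaviour;
  assetBalance: number;      // agents hold both Asset and Capital balances
  capitalBalance: number;
  positions: { side: "Taker" | "Maker"; size: number }[];
};

// Spawn a named cohort so the group's size and interactions can be traced
// over the course of a simulation run.
function spawnCohort(cohort: string, count: number, behaviour: Behaviour): Agent[] {
  return Array.from({ length: count }, () => ({
    cohort,
    behaviour,
    assetBalance: 10,
    capitalBalance: 10_000,
    positions: [],
  }));
}

const traders = spawnCohort("Traders", 250, {
  riskAversion: 0.3,
  responsiveness: 0.8,
  renewsPositions: true,
});
```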
Next Up:
- Building out an analysis framework and tooling to accompany the aforementioned upgrades (large task)
- Aligning the protocol logic with the latest changes (small task)
- Running multiple simulations (large task)
DApp
Browser and Device compatibility
We have been working on testing compatibility with, and improving support for, the maximum possible number of devices, browsers and platforms.
Improve dApp User Experience
Navigation efficiency required improvement, as certain areas of core dApp functionality required multiple steps in the interface. To address this, we replaced the multi-page flow for executing Protection, Earning and Staking functions with a single unified console.
We’ve made some tweaks to our design style guide to improve the overall look and feel of the dApp.
Improve dApp Performance
We have been working on increasing the computational performance of the dApp, including improving data load and render times and general state management.
The devs will also be focusing on profiling and improving the dApp’s performance metrics.
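As one small, generic example of the kind of render-time optimisation involved (assuming a React-based dApp; the component and data shape below are hypothetical, not Bumper’s actual code):

```typescript
// Memoise a derived value so it is not recomputed on every render; one
// common pattern for cutting render times in a React dApp.
import { useMemo } from "react";

function PositionSummary({ positions }: { positions: { size: number }[] }) {
  // Recompute the total only when `positions` actually changes.
  const total = useMemo(
    () => positions.reduce((sum, p) => sum + p.size, 0),
    [positions],
  );
  return <span>Total protected: {total}</span>;
}
```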
Code Reviews
Some bad code has seeped into our dApp codebase that could limit efficient deployment of future updates.
As a result, we are now conducting strict code reviews to ensure that we are following best practices, as well as cleaning up certain areas of the codebase.
Whilst this takes some time to complete, it will result in significant benefits when deploying future upgrades.