Wasm `unreachable` instruction executed

Issue Report

Environment

  • Operating System: Ubuntu 20.04 LTS
  • CPU: 6 cores
  • RAM: 6 GB
  • Storage: 255 GB NVMe
  • Plot Size: 100 GB
  • Subspace Deployment Method: Nodes.Guru sh Script

Problem

The node was synced and running.
After an unexpected server reboot, I get an error in the node's logs.

Already tried:

  1. Restart Node, Farmer
  2. Reboot OS

Expected result

  • A synced and running node.

What happens instead

  • Sync stuck.
2022-10-20 10:10:30 [PrimaryChain] panicked at 'Storage root must match that calculated.', /root/.cargo/git/checkouts/substrate-7bc20b373ca3e834/1a7c287/frame/executive/src/lib.rs:479:9
2022-10-20 10:10:30 [PrimaryChain] Block prepare storage changes error: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
WASM backtrace:
    0: 0x21e8 - <unknown>!rust_begin_unwind
    1: 0x12d4 - <unknown>!core::panicking::panic_fmt::hf56410f696eb8099
    2: 0xabe8d - <unknown>!Core_execute_block
    
2022-10-20 10:10:30 [PrimaryChain] 💔 Error importing block 0x6f0a1a6d0cfb0792e294eb9dd46ab5366db8f05f8ad162f707accc32b6098efb: consensus error: Import failed: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
WASM backtrace:
    0: 0x21e8 - <unknown>!rust_begin_unwind
    1: 0x12d4 - <unknown>!core::panicking::panic_fmt::hf56410f696eb8099
    2: 0xabe8d - <unknown>!Core_execute_block

I would start with the basics, such as a restart/wipe/reinstallation via the script (a rough sketch of that step is at the end of this reply), but I would also advise speaking with the creators of the Nodes.Guru script, as it does not follow our general official installation guide. For the current official installation guide, you can visit our GitHub or Docs.

GitHub: https://github.com/subspace/subspace
Docs: https://docs.subspace.network

Just for the sake of testing, try the official docs and see if the issue replicates.
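
For reference, here is a minimal sketch of what the restart/wipe step could look like on a systemd-based setup. The unit names (node.service, farmer.service) and the data directory path are assumptions based on typical installs, not something taken from your setup; substitute whatever your installation actually uses.

    # Stop both services cleanly (unit names assumed; check with `systemctl list-units`)
    sudo systemctl stop farmer.service
    sudo systemctl stop node.service

    # Wipe the node database so it re-syncs from scratch
    # (placeholder path — point it at your node's actual data directory)
    rm -rf "$HOME/.local/share/subspace-node"

    # Start the node again and follow its logs
    sudo systemctl start node.service
    journalctl -u node.service -f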

The node was ALREADY synced and was working well.
The issue shows up after unexpected server reboot.

I have already tried, more than once:

  1. Restart Node, Farmer
  2. Reboot OS

It looks like the unexpected server reboot caused database corruption. It is best to have a UPS and to shut the server down properly.
Some improvements are planned for upcoming updates, but it is best to avoid such situations in the first place.
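
When a planned reboot is unavoidable, stopping the services first lets the node flush and close its database cleanly before the machine goes down. A minimal sketch, with the same caveat as above that the systemd unit names are assumptions:

    # Stop the farmer, then the node, before rebooting
    sudo systemctl stop farmer.service
    sudo systemctl stop node.service

    # Reboot only after both units have exited cleanly
    sudo reboot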


Hi there,

That was similar to my case, except my node was not fully synced yet when my computer rebooted unexpectedly. I was not able to run the node and farm afterwards. I had to delete everything and start again… quite frustrating…

Version: 0.1.0-b63028b4d03
0: sp_panic_handler::set::{{closure}}
1: std::panicking::rust_panic_with_hook
2: std::panicking::begin_panic_handler::{{closure}}
3: std::sys_common::backtrace::__rust_end_short_backtrace
4: rust_begin_unwind
5: core::panicking::panic_fmt
6: core::result::unwrap_failed
7: <sp_state_machine::ext::Ext<H,B> as sp_externalities::Externalities>::storage
8: sp_io::storage::get_version_1
9: sp_io::storage::ExtStorageGetVersion1::call
10: <F as wasmtime::func::IntoFunc<T,(wasmtime::func::Caller,A1),R>>::into_func::wasm_to_host_shim
11:
12:
13:
14:
15:
16: wasmtime_runtime::traphandlers::catch_traps::call_closure
17: wasmtime_setjmp
18: sc_executor_wasmtime::runtime::perform_call
19: <sc_executor_wasmtime::runtime::WasmtimeInstance as sc_executor_common::wasm_runtime::WasmInstance>::call_with_allocation_stats
20: sc_executor_common::wasm_runtime::WasmInstance::call_export
21: sc_executor::native_executor::WasmExecutor::with_instance::{{closure}}
22: sp_state_machine::execution::StateMachine<B,H,Exec>::execute_aux
23: <subspace_runtime::RuntimeApiImpl<SR_API_BLOCK,RuntimeApiImplCall> as sp_api::Core<SR_API_BLOCK>>::Core_execute_block_runtime_api_impl
24: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
25: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
26: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
27: <sc_consensus_subspace::SubspaceBlockImport<Block,Client,Inner,CAW,CIDP> as sc_consensus::block_import::BlockImport>::import_block::{{closure}}
28: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
29: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
30: sc_consensus::import_queue::basic_queue::block_import_process::{{closure}}
31: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
32: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
33: <tracing_futures::Instrumented as core::future::future::Future>::poll
34: tokio::runtime::task::raw::poll
35: std::sys_common::backtrace::__rust_begin_short_backtrace
36: core::ops::function::FnOnce::call_once{{vtable.shim}}
37: std::sys::unix::thread::thread::new::thread_start
38: start_thread
39: clone
Thread 'tokio-runtime-worker' panicked at 'Externalities not allowed to fail within runtime: "Trie lookup error: Database missing expected key: 0x89bd1d85f6dd2de28d50f10afbbf947711bd7ff3bfd70cd0bc2e6c9796625477"', /home/runner/.cargo/git/checkouts/substrate-7bc20b373ca3e834/1a7c287/primitives/state-machine/src/ext.rs:189
This is a bug. Please report it at:
https://forum.autonomys.xyz
node.service: Main process exited, code=exited, status=1/FAILURE
node.service: Failed with result 'exit-code'.


This doesn't seem related at all; please create a new topic instead.