Crowd-sourced AI compute

Cactus provides cheaper compute by running on a global network of mobile devices.

import numpy as np
import tf_keras as keras
import cactus as ct

# Toy regression data (illustrative placeholder for your own dataset)
inputs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0], dtype=float)
outputs = np.array([-3.0, -1.0, 1.0, 3.0, 5.0], dtype=float)

# A standard Keras model, defined and compiled as usual
model = keras.Sequential([
    keras.layers.Input(shape=(1,)),
    keras.layers.Dense(units=1)
])
model.compile(optimizer='sgd', loss='mean_squared_error')

# Only additional code needed
trainer = ct.Trainer(
    model, inputs, outputs,
    epochs=10, batch_size=2,
)
trainer.fit()

Our mission

More and more data centers are being built around the world, while the majority of global compute sits idle in mobile devices. Our mission is to unlock that compute.


Our product

The first truly on-demand compute framework

Only pay for runtime

[Chart: daily runtime usage — Today, Nov 11, Nov 10]

We don't rely on data centers, which keeps our costs low. That translates into low pricing for you.

[Screenshot: training jobs dashboard — EchoNet V0.1, Completed / Running]

ML-native framework

Using Cactus requires minimal code changes and is as easy as installing a Python package.
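For illustration, here is a minimal before/after sketch of that change. The pip package name is an assumption inferred from the `import cactus as ct` statement above, and the toy data simply mirrors the snippet at the top of this page.

# Assumed install step (package name inferred from `import cactus as ct`):
#   pip install cactus

import numpy as np
import tf_keras as keras
import cactus as ct  # the only new import

# The same toy regression setup as in the snippet above.
inputs = np.array([-1.0, 0.0, 1.0, 2.0], dtype=float)
outputs = np.array([-3.0, -1.0, 1.0, 3.0], dtype=float)
model = keras.Sequential([keras.layers.Input(shape=(1,)), keras.layers.Dense(units=1)])
model.compile(optimizer='sgd', loss='mean_squared_error')

# A plain Keras script would train with:
#   model.fit(inputs, outputs, epochs=10, batch_size=2)
# With Cactus, only that call changes:
trainer = ct.Trainer(model, inputs, outputs, epochs=10, batch_size=2)
trainer.fit()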


Autoscaling. Built-in.

Cactus dynamically splits each workload across as many devices as necessary.
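To make the splitting idea concrete, below is a purely illustrative sketch of proportional workload sharding. The Device class, speed scores, and split_workload function are hypothetical stand-ins, not part of the Cactus API or its actual scheduler.

# Illustrative sketch only: a toy version of splitting one workload
# across however many devices happen to be available.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    speed: float  # hypothetical relative speed score; a real scheduler would measure this

def split_workload(num_samples: int, devices: list[Device]) -> dict[str, range]:
    """Assign each device a contiguous shard sized in proportion to its speed."""
    total_speed = sum(d.speed for d in devices)
    shards, start = {}, 0
    for i, d in enumerate(devices):
        if i == len(devices) - 1:
            end = num_samples  # last device takes the remainder
        else:
            end = start + round(num_samples * d.speed / total_speed)
        shards[d.name] = range(start, end)
        start = end
    return shards

# The same call works whether 3 or 300 devices are online.
pool = [Device("phone-a", 1.0), Device("phone-b", 0.5), Device("tablet-c", 2.0)]
print(split_workload(num_samples=1000, devices=pool))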

Try the future of AI compute

Get ready to pay less for faster compute.
Coming soon.