TensorDock: Affordable, Easy, Hourly Cloud GPUs From $0.32/hour | Free $15 Credit!

lentro (Hosting Provider)

TensorDock

Looking for an alternative to the big, expensive cloud providers that fleece you on cloud GPUs? Meet TensorDock.



We're a small, close-knit startup based in Connecticut that sells virtual machines with dedicated GPUs attached. Our primary goal isn't to make money; it's to democratize large-scale high-performance computing (HPC) and make it accessible to everyday developers.

Why TensorDock?

1. Ridiculously Easy
Your time is money, so we've tried to make your life as easy as possible. We built our own panel, designed for the GPU use case. No WHMCS here; we did things our way. We have an API, too.

When you deploy a Linux server, NVIDIA drivers, Docker, NVIDIA-Docker2, CUDA toolkit, and other basic software packages are preinstalled. For Windows, we include Parsec.
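As a quick sanity check after deploying (a minimal sketch, not an official TensorDock tool), you can confirm the preinstalled tooling is on your PATH; the tool names here are just the usual binaries for the packages listed above:

```python
import shutil

def check_gpu_stack(tools=("nvidia-smi", "docker", "nvcc")):
    """Map each expected tool to whether it's found on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

missing = [t for t, ok in check_gpu_stack().items() if not ok]
if missing:
    print("missing:", ", ".join(missing))
else:
    print("GPU stack looks good")
```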

2. Ridiculously Cheap
The cheapest VM you can launch is a Quadro RTX 4000 with 2 vCPUs, 4 GB of RAM, and 100 GB of NVMe storage for $0.32/hour. If you're running an hourly GPU instance at another provider, check our pricing; you'll save by switching to us. If you can commit long term, we can give discounts of up to 40%, sometimes 60% or higher.

Our pricing model is unusual. During our experimentation phase, we purchased a ton of different servers and ended up with a heterogeneous fleet, so we decided to charge per resource. Customers are rewarded for choosing the smallest amount of CPU/RAM, and they'll be placed on the smallest host node available. Select your preferred GPU and other configurations, and you'll only be billed for what you're allocated. It's that simple.

If you're training an ML model for 5 hours on 4x NVIDIA A5000s, it'll cost you less than $20.
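To make the per-resource math concrete, here's a rough sketch in Python. The unit rates below are made-up placeholders, not the actual rate card; the real numbers are on the pricing page linked below.

```python
# Hypothetical per-resource hourly rates (illustrative only; see
# https://tensordock.com/pricing for actual numbers).
RATES = {
    "gpu_a5000": 0.90,   # $/GPU/hour (assumed)
    "vcpu":      0.01,   # $/vCPU/hour (assumed)
    "ram_gb":    0.005,  # $/GB/hour (assumed)
    "nvme_gb":   0.0001, # $/GB/hour (assumed)
}

def hourly_cost(gpus=0, vcpus=0, ram_gb=0, nvme_gb=0):
    """Per-resource billing: you pay only for what you allocate."""
    return (gpus * RATES["gpu_a5000"] + vcpus * RATES["vcpu"]
            + ram_gb * RATES["ram_gb"] + nvme_gb * RATES["nvme_gb"])

# A 5-hour training run on 4x A5000 with a modest CPU/RAM allocation
run = 5 * hourly_cost(gpus=4, vcpus=8, ram_gb=32, nvme_gb=200)
print(f"${run:.2f}")  # → $19.30
```

With these assumed rates, picking a smaller CPU/RAM allocation directly lowers the bill, which is the point of per-resource pricing.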

3. Live GPU Stock
As of this very moment, we have over 1,000 GPUs in stock, with another 5,000 GPUs available through reservation: you contact us, and we tell our partner cloud providers to install our host-node software stack on their idle GPUs. We can handle your computing needs, no matter how large.

The details

Because we charge per-resource, just check out our pricing:
https://tensordock.com/pricing

You can register here:
https://console.tensordock.com/register

And then deploy a server here:
https://console.tensordock.com/deploy

It's that simple.



The LES Offer

Not everyone needs GPUs, especially on a server forum like LES. So this is more of a soft launch for us before we move on to other ML-related forums at the start of next year :)

This is only for LES users with at least 5 thanks and 5 posts/comments. If you already claimed the signup bonus on LET, unfortunately you can't claim a second one :)

$5 in account credit for registering and posting your user ID

Register: https://console.tensordock.com/register
User ID: https://console.tensordock.com/home (find it under the "Your Profile" box)

Then, post:
Cloud GPUs at https://tensordock.com/, ID [Your User ID]

E.g. if your user ID was recbob0gcd, you'd post:
Cloud GPUs at https://tensordock.com/, ID recbob0gcd

Additional $10 in account credit for creating a server & giving feedback

Once we've given you $5 in account credit, go create a GPU server and give us some feedback on the experience. Two sentences, please! Again, post your user ID with this comment, and we'll give you an additional $10 in account credit. Bonus if you try using our API :)

The goal is to get some feedback to improve the product before we go bigger :)

~ Mark & Richard



Website: https://tensordock.com/
Contact: https://tensordock.com/contact


Questions? Feel free to ask within this thread.

Comments

  • An exotic LES offering! I like it on that basis alone!

For something named after Google's TensorFlow, I do think you'll need to add GCP pricing to the industry comparison tab though...

If you've got 1,000 GPUs in stock, you could consider soaking up some of that capacity via spot pricing? Your dedicated pricing seems vaguely competitive with Google's spot, so presumably going spot too could leapfrog GCP. idk... just speculating here

    Thanked by (2)Ympker lentro
Not_Oles (Hosting Provider, Content Writer)

    Hey @lentro! Way cool! Congrats! Best of luck! :)

    Thanked by (2)Ympker lentro

    Tom. 穆坦然. Not Oles. Happy New York City guy visiting Mexico! How is your 文言文?
    The MetalVPS.com website runs very speedily on MicroLXC.net! Thanks to @Neoon!

  • Finally someone utilizes GPUs for something better than buttcoin mining. :relieved:

    Thanked by (1)lentro

    Educationally teaches you with knowledge, while you learn and conglomeratively alluminate your academic intellectual profile: https://lowend.wiki
    „Homo homini rattus.“

Mason (Administrator, OG)

    @Janevski said:
    Finally someone utilizes GPUs for something better than buttcoin mining. :relieved:

    What do you think people will spend their free credits on? :trollface:

    Thanked by (2)mfs Janevski

    Humble janitor of LES
    Proud papa of YABS

Mason (Administrator, OG)

    @lentro - looks like an awesome project! Best of luck with everything and the console/dashboard looks super nice!

    Thanked by (2)Ympker lentro

    Humble janitor of LES
    Proud papa of YABS

lentro (Hosting Provider)

    Haha, thanks for the support everyone! Happy to give starting credits to anyone :)

@havoc said: For something named after Google's TensorFlow, I do think you'll need to add GCP pricing to the industry comparison tab though...

Haha, it's actually named after the Tensor core in general. Tensor cores are the specialized cores that ML workloads use; they're much faster at matrix math than NVIDIA GPUs' regular CUDA cores. So our idea was that this is a "dock" where people can load up on tensor computing power for their ML needs :joy:
    https://www.nvidia.com/en-us/data-center/tensor-cores/

    @havoc said: 1000 GPU in stock you could consider soaking up some of that capacity via spot pricing

Actually, we don't own all the GPUs. Last month, we got six figures in money to play with, which bought a few hundred GPUs. The rest are with other companies that we're partnering with; essentially, we can share capacity to handle surges together. As of right now, around half of our own GPUs are being used, with the rest idle mining (which by itself is enough to pay back the initial investment cost). My goal is to get utilization up to 100%, of course, but for now it's OK. We will definitely consider adding spot instances, but probably in at least half a year's time, given that demand is good enough right now :)

lentro (Hosting Provider)

    @Mason said:

    @Janevski said:
    Finally someone utilizes GPUs for something better than buttcoin mining. :relieved:

    What do you think people will spend their free credits on? :trollface:

    LMAO!!!

Actually, earlier last year I watched gpu.land very closely: https://www.producthunt.com/posts/gpu-land
From what I could tell, their maker, "Ilja Moi," isn't actually a real person but got $50k in AWS credits or something, and was reselling those credits to ML developers, turning $50k in AWS credits into something like $20k in real cash. Nowadays AWS blocks crypto miners (an insider told me they inspect packets; e.g., a constant stream of small hashes every second to a mining farm pool IP on port 3333 will probably get you terminated), so I think this was a creative way of stealing money from AWS. No idea though :joy:

But don't get me wrong, we idle mine (Eth on GPUs; Filecoin/Chia/Storj on HDDs; etc.), so we have 24x7 utilization of all resources and are operationally profitable even with 0 customers (whether we make enough to get a return on our investment is another question). So @Janevski, maybe you should still hate me :joy:

    Thanked by (1)Mason
lentro (Hosting Provider)

    @Mason said: @lentro - looks like an awesome project! Best of luck with everything and the console/dashboard looks super nice!

    Thanks! Really glad to see all the support from LES!

I'm feeling like in a week or two the project will be ready to be posted on Google so we can start advertising to real ML users! Right now we have over a dozen VMs from LES/LET users, and we're getting a lot of good feedback!

    Thanked by (1)Mason
  • Cloud GPUs at https://tensordock.com/, ID recqcevltm
Too bad it's only available in US locations.

lentro (Hosting Provider)

    @kuroneko23 said: recqcevltm

    Congratulations on being the first, check your account :)

Surprised that people on LES like to give feedback, while LET has many more people who just want the credits.

    @kuroneko23 said: US locations

    US is the cheapest for power and bandwidth at the moment. We're thinking about maybe Europe and Asia, but those are long term. Where would you like us to be?

  • lentro said:

    US is the cheapest for power and bandwidth at the moment. We're thinking about maybe Europe and Asia, but those are long term. Where would you like us to be?

Somewhere in Asia, maybe Singapore or Japan. I use a GPU server every 2-3 months (12 hours, 5 days) to play browser games, so an RTX 4000 is overkill, but eh. Will try the server probably today or tomorrow.

    Thanked by (1)lentro
lentro (Hosting Provider)

    @kuroneko23 said: Will try the server probably today or tomorrow

Interesting. Probably not Asia for a while due to the higher costs there... but in any case, let me know how the server goes!

    Thanked by (1)kuroneko23
Only Ubuntu 18.04, 20.04, and Windows 10 (BYOL) are available as default OS selections; you can probably ask support to install your own OS preference.

Server Management "Panel" [screenshots: Overview, Networking, Billing, Actions]

    YABS :
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2021-12-03                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Wed Dec 29 08:43:26 UTC 2021
    
    Basic System Information:
    ---------------------------------
    Processor  : Intel(R) Xeon(R) Gold 6134 CPU @ 3.20GHz
    CPU cores  : 2 @ 3199.998 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 3.8 GiB
    Swap       : 0.0 KiB
    Disk       : 48.4 GiB
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 94.43 MB/s   (23.6k) | 675.16 MB/s  (10.5k)
    Write      | 94.68 MB/s   (23.6k) | 678.71 MB/s  (10.6k)
    Total      | 189.11 MB/s  (47.2k) | 1.35 GB/s    (21.1k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 1.06 GB/s     (2.0k) | 1.04 GB/s     (1.0k)
    Write      | 1.12 GB/s     (2.1k) | 1.11 GB/s     (1.0k)
    Total      | 2.19 GB/s     (4.2k) | 2.16 GB/s     (2.1k)
    
    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed
                    |                           |                 |
    Clouvider       | London, UK (10G)          | 1.27 Gbits/sec  | 71.3 Mbits/sec
    Online.net      | Paris, FR (10G)           | 1.52 Gbits/sec  | 1.35 Gbits/sec
    WorldStream     | The Netherlands (10G)     | 1.25 Gbits/sec  | 1.29 Gbits/sec
    WebHorizon      | Singapore (1G)            | 334 Mbits/sec   | 319 Mbits/sec
    Clouvider       | NYC, NY, US (10G)         | 1.77 Gbits/sec  | 669 Mbits/sec
    Velocity Online | Tallahassee, FL, US (10G) | 2.55 Gbits/sec  | 2.41 Gbits/sec
    Clouvider       | Los Angeles, CA, US (10G) | 3.49 Gbits/sec  | 7.09 Gbits/sec
    Iveloz Telecom  | Sao Paulo, BR (2G)        | 108 Mbits/sec   | 863 Mbits/sec
    
    Geekbench 5 Benchmark Test:
    ---------------------------------
    Test            | Value
                    |
    Single Core     | 1031
    Multi Core      | 2034
    Full Test       | https://browser.geekbench.com/v5/cpu/11874288
    


I like their low-balance notification alert, but it can't be customized.
[Screenshot: Alert Schedule]

You can also withdraw your balance to your original payment method, minus a fee.
[Screenshot: Withdrawal]

The server works just fine (installing drivers, etc.). I didn't see any way to upgrade resources (CPU/disk/RAM) in the panel; that might be useful for some people.
Only Stripe for deposits, no PayPal.
Can't change any profile details (email, password, etc.).
48-hour standard email support, or a 15-minute video call with a representative (which is nice).
[Screenshot: Available Support]

sorry for the plain "feedback" @lentro, I'm not used to this kind of thing
    Thanked by (1)Mason
  • @kuroneko23 said:
    Schedule Video Chat
    Ask us sales questions, follow up with a previously submitted support request, or just hang out and chat about GPUs :)

    Now we know where to find someone when we feel lonely.

    Thanked by (1)chimichurri

    MetalVPS now costs $20.22/month

  • Do you allow mining?

  • @AaronSS said:
    Do you allow mining?

    From OGF

lentro said:
So my answer is: ideally not. Ideally, use your brain to do something like learning ML or Blender. But if you want to mine, your $15 in credits might generate like $3, so I guess sure, if you're fine with getting only a bit of money, go ahead then :joy:

    Thanked by (1)lentro
lentro (Hosting Provider)

    @yoursunny said:

    @kuroneko23 said:
    Schedule Video Chat
    Ask us sales questions, follow up with a previously submitted support request, or just hang out and chat about GPUs :)

    Now we know where to find someone when we feel lonely.

LOL, yes. I didn't want a phone number, but I also know how shady a company with only an email address might seem, so I set up a Calendly where you can schedule a call with us at least a day in advance, so we aren't surprised when someone wants to chat :joy:

lentro (Hosting Provider)

    @kuroneko23 said: probably can ask support to install your own OS preference

Sure, any OS that supports cloud-init is fair game. Tbh, I haven't seen anyone run machine learning on CentOS, so I didn't really see the need.

    @kuroneko23 said: might be useful for some people

Agreed, we'll look into adding support for this! Probably in the next few months or so; the database would need a bit of reworking to support it (we'd need multiple transactions for a single server, one for each hardware configuration the server has been provisioned in).

    Thanks for the feedback! Check your account for feedback credits :)

    Thanked by (1)kuroneko23
johnk (Hosting Provider)

    Fascinating. Never thought I'd find this on LES....

    Signed up, small issue, but first thing I see is this:

Where'd you pull the performance metrics from? Did you run them yourselves? For the A/V100s, it looks like you have FP16 numbers, not TF32 (see: https://lambdalabs.com/blog/nvidia-a100-gpu-deep-learning-benchmarks-and-architectural-overview/).

Might be worth looking into standardizing metrics across all GPUs. If memory serves me correctly, the AX000 series adds tensor cores for FP16, so it should be at least nominally higher than what you have now.

    Second - email went to spam. Might want to look at your DKIM setup?

    mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of tensordock.com designates 136.175.108.144 as permitted sender) [email protected]
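For anyone debugging a similar delivery issue, here's a minimal sketch of pulling the pass/fail results out of an Authentication-Results-style header like the one above (real headers vary a lot; this only handles the simple method=result pattern):

```python
import re

def parse_auth_results(header: str) -> dict:
    """Return {method: result} pairs, e.g. {'dkim': 'fail', 'spf': 'pass'}."""
    # Only matches the bare "method=result" tokens; ignores comments in parens.
    return dict(re.findall(r"\b(dkim|spf|dmarc)=(\w+)", header))

header = ("mx.zohomail.com; dkim=fail; spf=pass "
          "(zohomail.com: domain of tensordock.com designates "
          "136.175.108.144 as permitted sender)")
print(parse_auth_results(header))  # {'dkim': 'fail', 'spf': 'pass'}
```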

Some OS templates might be a good idea too. Don't make me install Jupyter, TF/PyTorch, and wrangle with CUDA myself :)

    Spun up a server, and I swear, I only clicked the button once, but two servers were provisioned:

    Maybe disable the button after it has been clicked.

Otherwise, it's very nice. Ran as expected, of course. Personally, I stick with TPUs when possible because they're generally still faster and give more flexibility with VRAM, but you have a very competitive platform.

    A way to resize instance / add compute + GPUs would be great.

    Good luck!

    Thanked by (2)Ympker lentro
lentro (Hosting Provider)

    @johnk said:
    Fascinating. Never thought I'd find this on LES....

    Signed up, small issue, but first thing I see is this:
Where'd you pull the performance metrics from? Did you run them yourselves? For the A/V100s, it looks like you have FP16 numbers

    Haha thanks for checking it out! As you can see, lots of improvements we can make before we launch on ML forums.
    I added some additional credits to your account as a thanks for your feedback!

    For the GPU metrics, we pulled the numbers from NVIDIA's data sheets (with sparsity):
    https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-us-nvidia-1758950-r4-web.pdf

The numbers we got are next to "Tensor Float 32" in the right-hand table. We'll probably build our own benchmarks for real-world performance like Lambda does, since it's hard to compare the different GPUs. The A5000, for example, is our best deal for price-to-performance, I believe, even for machine learning, but the current deploy page doesn't really communicate that.

    Second - email went to spam. Might want to look at your DKIM setup?

Will look into this, thanks! Gmail seems to deliver to the inbox and Microsoft to spam, so probably some misconfiguration. Good catch!

    Some OS templates might be a good idea too. Don't make me install jupyter, tf/pyt and wrangle with CUDA myself :)

    Agreed, will do ahead of the actual "full" launch :)

    two servers were provisioned:
    Maybe disable the button after it has been clicked.

It takes 2 seconds for the API to respond confirming the deployment request, so disabling the button after it's clicked is definitely a great suggestion; that way it never gets triggered twice.
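Disabling the button helps on the client side; a belt-and-suspenders fix is server-side deduplication. Here's a hypothetical sketch (not our actual code) of an idempotency-key guard, where the panel would attach a one-time key per click so a replayed request returns the original server instead of provisioning a second one:

```python
import uuid

# In-memory map of idempotency key -> server ID. A real service would
# persist this (e.g. in the database) with an expiry.
_seen: dict[str, str] = {}

def deploy(idempotency_key: str) -> str:
    """Provision a server, or return the one already created for this key."""
    if idempotency_key in _seen:
        return _seen[idempotency_key]          # duplicate click: same server
    server_id = f"srv-{uuid.uuid4().hex[:8]}"  # stand-in for provisioning
    _seen[idempotency_key] = server_id
    return server_id

key = uuid.uuid4().hex
assert deploy(key) == deploy(key)  # a double-click yields exactly one server
```

This is the same pattern payment APIs use to make retries safe: the key, not the click, defines the request.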

    Otherwise, it's very nice. Run as expected of course. Personally, I stick with TPUs when possible because they are generally still faster + give more flexibility with VRAM, but you have a very competitive platform.

Overall, thanks so much for your feedback! I looked into TPUs, but the only one I could find (https://iot.asus.com/products/AI-accelerator/AI-Accelerator-PCIe-Card/) just wasn't good enough, and TPUs can't idle mine, so they're not feasible for a startup that needs to maximize revenue. A year ago, I chatted with a Chinese tech giant that now makes AI chips, but given the government sanctions and an inability to market them, I gave up pursuing that. For now, the big clouds have an oligopoly on specialized hardware. In any case, I hope our prices are low enough that the price-to-performance ratio is comparable to TPUs, and that you can use us for rendering if you ever need to in the future :)

johnk (Hosting Provider)
    edited December 2021

    @lentro said:

    @johnk said:
    Fascinating. Never thought I'd find this on LES....

    Signed up, small issue, but first thing I see is this:
Where'd you pull the performance metrics from? Did you run them yourselves? For the A/V100s, it looks like you have FP16 numbers

    Haha thanks for checking it out! As you can see, lots of improvements we can make before we launch on ML forums.
    I added some additional credits to your account as a thanks for your feedback!

    For the GPU metrics, we pulled the numbers from NVIDIA's data sheets (with sparsity):
    https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-us-nvidia-1758950-r4-web.pdf


    The numbers we got are next to the "Tensor Float 32" on the right table. We'll probably figure out our own benchmarks for real-world performance like Lambda, as it's hard to compare the different GPUs. The A5000, for example, is our best deal for price-to-performance, I believe, even for machine learning, but the current deploy page doesn't really communicate that.

I thought so. Sparsity performance is probably not the best measure to use; it's nuanced to certain models/setups. FP16 would probably be a better choice.

    Otherwise, it's very nice. Run as expected of course. Personally, I stick with TPUs when possible because they are generally still faster + give more flexibility with VRAM, but you have a very competitive platform.

    Overall, thanks so much for your feedback! I looked into TPUs but the only one I could find (https://iot.asus.com/products/AI-accelerator/AI-Accelerator-PCIe-Card/) was just too bad, and TPUs can't idle mine so it's not feasible for a startup that needs to maximize revenue. A year ago, I chatted with a Chinese tech giant who now makes AI chips, but given the government sanctions and an inability to market these, I gave up pursuing that. For now, the big clouds have an oligopoly when it comes to specialized hardware. In any case though, I hope the prices are low enough that the price-to-performance ratio is comparable to TPUs and that you can use us for rendering if you ever need to do that in the future :)

Ya, TPUs are the AI equivalent of ASICs for miners. They do one specialized task, but do it well. You won't be able to idle mine with these, and you're pretty much limited to ML/AI/tensor-based ops. No rendering, gaming, etc.

    Thanked by (1)lentro
  • Very cool! Good luck!

    Thanked by (1)lentro