Do you use cloud/virtual servers or dedicated ones?

I recently switched from cloud servers to dedicated ones (Hetzner) and I love the better performance, but I miss some of the flexibility I had with cloud servers: hourly billing, instant creation/deletion, easily expandable storage up to 10TB per volume, and software support in Kubernetes for both block storage and load balancers. Some maintenance operations in K8s are more complicated with dedicated servers, and adding more storage means adding up to two drives of up to 1TB each if I want fast NVMe. Not too bad, but not as flexible.
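To illustrate the flexibility I mean, here is a rough sketch with Hetzner's hcloud CLI (server and volume names are placeholders; assumes the CLI is installed and authenticated):

```bash
# Hourly-billed cloud server: created in seconds, billing stops on deletion.
hcloud server create --name k8s-node-1 --type cx31 --image ubuntu-20.04

# Block storage: attach a volume now, grow it later without reinstalling.
hcloud volume create --name data-1 --size 100 --server k8s-node-1
hcloud volume resize data-1 --size 500

# Done experimenting? Delete the server and pay only for the hours used.
hcloud server delete k8s-node-1
```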

What do you use/prefer and why? :)


Comments

  • I install Plesk directly on the dedicated server, without virtualization.

  • @Amadex said:
    I install Plesk directly on the dedicated server, without virtualization.

    Yeah, I set up Kubernetes directly too, without virtualization, so performance is maximized.


  • @vitobotta said:
    Yeah, I set up Kubernetes directly too, without virtualization, so performance is maximized.

    Proxmox isn't resource intensive.
    In fact, a hypervisor on a dedicated server has many benefits that outweigh the few CPU cycles gained by going without one.

    I can only imagine a hard reboot when some object is misconfigured or some buggy finalizer didn't unmount/release, etc.

    I only install k8s in virtualized environments.

    BTW, what flavour of k8s are you using, e.g. k0s, k3s, kubeadm, etc.?
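    (For reference, k3s in particular is a one-liner to try; this is the installer from the official k3s docs, run as root on a fresh single node:)

    ```bash
    # Install k3s as a single-node cluster (server and agent in one process).
    curl -sfL https://get.k3s.io | sh -

    # k3s bundles kubectl; verify the node registered.
    k3s kubectl get nodes
    ```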

  • edited September 2021

    My private resources: virtual, because I have yet to have a use case that would require something as beefy as a dedicated server...
    Resources we use at work: cloud. Even though we need a lot of resources, so technically renting a dedi would have been cheaper and a regular VPS would only help a bit (in fact, one of our environments is a de facto private cloud), using the (public) cloud makes life much easier, since managing everything yourself is a major PITA (for example, setting up Gluster for k8s PVs).

    Having said this, the public cloud costs a lot of money, so as far as my hobby projects are concerned, I don't think I'll ever use it :P (excl. the free tier of Oracle Cloud ;P)

    By the way, there are some hourly dedicated servers too (admittedly, not at pricing comparable to Hetzner's AX series) - two low-end examples:


  • @ehab said:
    Proxmox isn't resource intensive.
    In fact, a hypervisor on a dedicated server has many benefits that outweigh the few CPU cycles gained by going without one.

    I can only imagine a hard reboot when some object is misconfigured or some buggy finalizer didn't unmount/release, etc.

    I only install k8s in virtualized environments.

    BTW, what flavour of k8s are you using, e.g. k0s, k3s, kubeadm, etc.?

    Uhm.... interesting. I will look into it. Maybe I can temporarily move my stuff to Hetzner's cloud servers while I set up the dedicated ones. Thanks for the input!


  • @chimichurri said:
    My private resources: virtual, because I have yet to have a use case that would require something as beefy as a dedicated server...
    Resources we use at work: cloud. Even though we need a lot of resources, so technically renting a dedi would have been cheaper and a regular VPS would only help a bit (in fact, one of our environments is a de facto private cloud), using the (public) cloud makes life much easier, since managing everything yourself is a major PITA (for example, setting up Gluster for k8s PVs).

    Having said this, the public cloud costs a lot of money, so as far as my hobby projects are concerned, I don't think I'll ever use it :P (excl. the free tier of Oracle Cloud ;P)

    By the way, there are some hourly dedicated servers too (admittedly, not at pricing comparable to Hetzner's AX series) - two low-end examples:

    This is the first time I've heard of hourly-billed dedis; I didn't know they existed.


  • FYI, @vitobotta, I live in Tampere, so if you're around and want to have coffee, ping me.

  • @vitobotta said:
    Some maintenance operations in K8s are more complicated with dedicated servers

    Yeah, what I miss the most is the ability of a node to recover when you use something like an autoscaling group on AWS.
    But maybe as long as you don't need to “hyper scale” your cluster (you know, a node autoscaler adding many nodes at peak and scaling down at low traffic), a dedicated server fleet with plenty of spare resources is enough? Just use MetalLB with an ingress, point DNS at all of your nodes with DNS health checks, and you're good to go.
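    (A minimal sketch of the MetalLB side, assuming MetalLB is already installed in the metallb-system namespace and running in layer-2 mode; the address range is a placeholder for IPs routed to your nodes:)

    ```bash
    # Give MetalLB a pool of addresses to assign to LoadBalancer Services.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 203.0.113.240-203.0.113.250
    EOF
    ```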

  • edited September 2021

    I use exclusively virtual servers.
    Each virtual server costs $20/year or less, while every dedicated server would cost $20/month or more.

    I do have workloads that can fill a dedicated server.
    For example, I have a high-speed networking program that can transmit at over 100 Gbps.
    However, these do not speak TCP/IP, and require at least a VLAN and preferably a dedicated fiber.
    (I hear @wdmg has 100Gbps line in their data center, but I don't dare to ask for prices; I'm sure "if you have to ask, you can't afford it").
    Therefore, I just get two servers on a rack locally, and let them talk to each other…


  • mikho (Administrator, OG)

    I use both. Which one to get and use depends on the purpose.


  • @ehab said:
    FYI, @vitobotta, I live in Tampere, so if you're around and want to have coffee, ping me.

    How did you know I live in Finland? :D I live in Espoo btw :D


  • @akhfa said:

    @vitobotta said:
    Some maintenance operations in K8s are more complicated with dedicated servers

    Yeah, what I miss the most is the ability of a node to recover when you use something like an autoscaling group on AWS.
    But maybe as long as you don't need to “hyper scale” your cluster (you know, a node autoscaler adding many nodes at peak and scaling down at low traffic), a dedicated server fleet with plenty of spare resources is enough? Just use MetalLB with an ingress, point DNS at all of your nodes with DNS health checks, and you're good to go.

    So far I'm not having problems, really; it's just that I know I will have more maintenance than before when upgrading K8s, the OS, or Longhorn for storage. Also, storage may be a problem because I will need downtime to add disks, and there's a limit to how much storage I can have. Other than that, I'm loving the dedis!
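    (For what it's worth, the extra maintenance is mostly the drain/upgrade/uncordon cycle; a rough sketch, with the node name as a placeholder:)

    ```bash
    # Move workloads off the node before touching the OS, disks, or kubelet.
    kubectl drain k8s-node-1 --ignore-daemonsets --delete-emptydir-data

    # ...upgrade the OS / Longhorn / Kubernetes, add disks, reboot...

    # Let the scheduler place pods on the node again.
    kubectl uncordon k8s-node-1
    ```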


  • @yoursunny said:
    I use exclusively virtual servers.
    Each virtual server costs $20/year or less, while every dedicated server would cost $20/month or more.

    I do have workloads that can fill a dedicated server.
    For example, I have a high-speed networking program that can transmit at over 100 Gbps.
    However, these do not speak TCP/IP, and require at least a VLAN and preferably a dedicated fiber.
    (I hear @wdmg has 100Gbps line in their data center, but I don't dare to ask for prices; I'm sure "if you have to ask, you can't afford it").
    Therefore, I just get two servers on a rack locally, and let them talk to each other…

    Which virtual servers do you pay just $20/year or less for? What specs and which providers? I've never heard of prices like that before.


  • @vitobotta said:
    How did you know I live in Finland? :D I live in Espoo btw :D

    You mentioned it in one of the threads you opened.

  • @ehab said:

    @vitobotta said:
    How did you know I live in Finland? :D I live in Espoo btw :D

    You mentioned it in one of the threads you opened.

    Ah, I thought I had added it to my profile, but I didn't remember doing so :)


  • edited September 2021

    @vitobotta said:

    @yoursunny said:
    I use exclusively virtual servers.
    Each virtual server costs $20/year or less, while every dedicated server would cost $20/month or more.

    I do have workloads that can fill a dedicated server.
    For example, I have a high-speed networking program that can transmit at over 100 Gbps.
    However, these do not speak TCP/IP, and require at least a VLAN and preferably a dedicated fiber.
    (I hear @wdmg has 100Gbps line in their data center, but I don't dare to ask for prices; I'm sure "if you have to ask, you can't afford it").
    Therefore, I just get two servers on a rack locally, and let them talk to each other…

    Which virtual servers do you pay just $20/year or less for? What specs and which providers? I've never heard of prices like that before.

    Wait for Black Friday and similar sales.

    | hostname | provider | location | type | RAM | disk | annual price |
    |----------|----------|----------|------|-----|------|--------------|
    | vps0 | HostHatch | ORD | KVM | 512MB | 250GB HDD | $15 |
    | vps1 | VirMach | LAX | KVM | 1920MB | 45GB SSD | $8.88 |
    | vps2 | GreenKVM | SIN | KVM | 1GB | 30GB NVMe | $20 |
    | vps4 | Spartan Host | SEA | KVM | 1GB | 25GB NVMe | $16 |
    | vps5 | VirMach | BUF | KVM | 384MB | 22GB SSD | $4 |
    | vps6 | Nexril | DAL | KVM | 1GB | 15GB NVMe | $8.40 + 24 push-ups |
    | vps7 | VirMach | ATL | KVM | 1GB | 25GB SSD | $8.87 |
    | vps9 | WebHosting24 | MUC | KVM | 1GB | 10GB NVMe | €10 |
    | box3 | Gullo | YUL | VZ7 NAT | 256MB | 5GB HDD | $2 |
    | box8 | WebHorizon | WAW | KVM NAT | 1GB | 15GB NVMe | $15 |


  • Not_Oles (Hosting Provider, Content Writer)

    @yoursunny said: I use exclusively virtual servers.

    @yoursunny said: over 100 Gbps.

    @yoursunny said: I just get two servers on a rack locally, and let them talk to each other…

    Hi @yoursunny! Would you please share a little about the architecture and operating systems of the "two servers on a rack locally?" Thanks and best wishes from Mexico! :)


  • @Not_Oles said:
    Hi @yoursunny! Would you please share a little about the architecture and operating systems of the "two servers on a rack locally?" Thanks and best wishes from Mexico! :)

    Server specs are described in this paper, Section 7:
    https://www.nist.gov/publications/ndn-dpdk-ndn-forwarding-100-gbps-commodity-hardware

    YABS here:
    https://talk.lowendspirit.com/discussion/comment/45943/#Comment_45943

    Mellanox ConnectX-5 has PCIe 3.0, 16 lanes.
    Xeon Scalable has 48 lanes per processor, but some are reserved by NVMe, etc.
    To support three Ethernet adapters at full speed, the server must have dual processors.
    Having two NUMA sockets increases memory access latency when packet ingress and egress are on different NUMA sockets.

    My wishlist for a future order would be a single 64-core EPYC processor and a Mellanox ConnectX-6 200Gbps Ethernet adapter.
    EPYC has 128 PCIe lanes, and they're PCIe 4.0.
    I can potentially install six Ethernet adapters on one processor, without dealing with memory latency caused by NUMA sockets.
    However, I haven't found a motherboard for single EPYC that has six PCIe 4.0 slots; the most I've found is four slots.
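    (Back-of-envelope math behind those lane counts, assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane after 128b/130b encoding:)

    ```bash
    # One x16 PCIe 3.0 slot: ~985 MB/s per lane.
    echo "x16 bandwidth: $((985 * 16)) MB/s, about 126 Gbit/s -> one 100GbE port fits"

    # Three NICs want 3 * 16 = 48 lanes; a Xeon Scalable socket exposes 48 total,
    # and some go to NVMe/chipset, hence dual sockets for three adapters.
    echo "lanes needed for 3 NICs: $((3 * 16)) of 48 per socket"
    ```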



  • havoc (OG, Content Writer)
    edited September 2021

    Using a mix (though generally LES-style shared VPS, not dedis).

    I don't think the two are even in competition. The value proposition of something like a big LES VPS is that it's affordable to run 365 days a year. The value proposition of big cloud (GCP etc.) is that it's an ecosystem, not a server.

    If you just need a server and go for a cloud provider, then you're a moron IMO... you're paying for access to all the shiny toys in their ecosystem but not leveraging them.

    Lately I'm more intrigued by big cloud, though. Some of their free-tier offerings are ridiculous... and the harder something is for the layman to pick up, the more generous the offering. I.e., if you can architect your app to fit their paradigm, you can get quite far.

  • Not_Oles (Hosting Provider, Content Writer)

    For convenient reference, the following is a brief excerpt from the cited paper:

    In all the experiments described below, the forwarder is running on a Supermicro 6039P-TXRT server equipped with dual Intel Xeon Gold 6240 CPUs (18 cores at 2.60 GHz, with Hyper-Threading disabled), 256 GB of 2933 MHz memory in four channels (64 × 1 GB hugepages have been allocated to NDN-DPDK on each NUMA socket), and Mellanox ConnectX-5 100 Gbps Ethernet adapters. The operating system is Ubuntu Linux 18.04, with DPDK v19.11 and NDN-DPDK commit 34f561f4ef0e5790d4999107dcbb4c2eab82af66. The forwarder node is connected to two traffic generators, one on each Ethernet port, via direct attach copper cables. The traffic generators emulate a producer application and a number of consumers requesting content from the producer.

    Thanks to @yoursunny for explaining and for terrific work! :)

