Co-location advice

gks OG
edited January 2021 in Help

We are a very early-stage IoT and analytics company [not in the hosting business]. After working with good VPS hosts [netcup, Hetzner], we are now moving some servers into co-location in India.

We bought used servers and are setting them up and testing them for LXD clusters at the moment. This is our first time handling bare metal and co-location.

Do we need a hardware firewall at the data center, or can we manage with the Ubuntu Linux firewall?
We will have a switch for internal networking.
Data Center has redundant power supply.
We got a quotation for 1 kVA; we have 4 servers + 1 switch. If I understand the math correctly, that can handle about 800 watts.
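
Here is the back-of-the-envelope check behind that 800 W figure (the 0.8 power factor is an assumption we still need to confirm with the DC, and the per-device wattages are placeholders):

```python
# Rough colo power-budget check. The 0.8 power factor and the per-device
# wattages are assumptions -- confirm the PF with the DC and replace the
# numbers with measured peak draw for the actual gear.
POWER_FACTOR = 0.8            # common planning assumption; ask the DC
QUOTED_KVA = 1.0

usable_watts = QUOTED_KVA * 1000 * POWER_FACTOR   # 1 kVA * 0.8 = 800 W

# Hypothetical peak draws (watts) for 4 servers + 1 switch.
peak_draw = {"server1": 160, "server2": 160, "server3": 160,
             "server4": 160, "switch": 60}

total = sum(peak_draw.values())
print(f"budget: {usable_watts:.0f} W, planned peak: {total} W, "
      f"headroom: {usable_watts - total:.0f} W")
```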

Is there any checklist for co-location?

Comments

  • imok

    Spare parts?

  • CamoYoshi

    Strongly recommend having a fully redundant off-site location in case your datacenter encounters issues. You can get away with a software firewall, but a hardware firewall is definitely a good thing to have, especially since you may be dealing with personal information. Just try not to wing any aspect of this, since it's a business: one data leak or large outage and you might not recover. I would consider bringing in a consultant with experience in this area; they can look at your requirements and infrastructure and determine the best plan going forward.



  • Ian_Dot_Tech Hosting Provider

    A hardware firewall would generally not be needed. In regard to "Is there any checklist for co-location?"

    • You did not state how much bandwidth you are getting, nor the port speed.
    • Does the data center have an uptime SLA or DDoS protection?
    • What does the remote hands fee structure look like?
    • If you do remote hands in person do you have access to the facility?
    • Is their support 24/7?
    • What is the deracking fee to remove the system?
  • Ympker OG, Content Writer

    Maybe @SGraf can chime in? I think he might know a thing or two about colocation :)

  • @imok said:
    Spare parts?

    Valid point, thanks. We may need to keep spare parts ready in case a server fails, and also increase the number of servers to ensure HA. We picked only servers and parts that are commonly available on the market, rather than rare or less popular ones.

  • @CamoYoshi said:
    Strongly recommend having a fully redundant off-site location in case your datacenter encounters issues. You can get away with a software firewall, but a hardware firewall is definitely a good thing to have, especially since you may be dealing with personal information. Just try not to wing any aspect of this, since it's a business: one data leak or large outage and you might not recover. I would consider bringing in a consultant with experience in this area; they can look at your requirements and infrastructure and determine the best plan going forward.

    Off-site, we are planning to have a backup. All off-site and DC servers will be connected over a VXLAN/WireGuard interface; a sketch of what we have in mind is below.
    In fact, we are pushing towards a hybrid architecture: keep some workloads with AWS or Hetzner, and move the heavy computation and large-memory requirements to co-located servers. We are yet to finalise the deal, so having a consultant would help here. We will run the servers in our office environment for one month to make sure we understand things well enough before moving to the DC.
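
    A minimal sketch of the overlay config we are considering; every key, IP, and endpoint below is a hypothetical placeholder:

    ```python
    # Render WireGuard configs for a tiny DC <-> off-site overlay.
    # All keys, IPs, and endpoints are hypothetical placeholders.
    PEERS = {
        "dc-node1":  {"wg_ip": "10.77.0.1/24", "endpoint": "198.51.100.10:51820"},
        "offsite-1": {"wg_ip": "10.77.0.2/24", "endpoint": "203.0.113.20:51820"},
    }

    def render(name: str) -> str:
        me = PEERS[name]
        lines = [
            "[Interface]",
            f"Address = {me['wg_ip']}",
            "PrivateKey = <private-key-here>",   # generate with: wg genkey
            "ListenPort = 51820",
        ]
        for peer, cfg in PEERS.items():
            if peer == name:
                continue
            lines += [
                "",
                "[Peer]",
                "PublicKey = <peer-public-key-here>",
                f"Endpoint = {cfg['endpoint']}",
                f"AllowedIPs = {cfg['wg_ip'].split('/')[0]}/32",
                "PersistentKeepalive = 25",      # keeps NAT mappings alive
            ]
        return "\n".join(lines)

    print(render("dc-node1"))
    ```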

  • @Ian_Dot_Tech said:
    A hardware firewall would generally not be needed. In regard to "Is there any checklist for co-location?"

    • You did not state how much bandwidth you are getting, nor the port speed.

    Right now, 500 GB. The DC seems cheap compared to other areas; they quoted INR 500 [about 6 or 7 euros] per month for another 500 GB.

    • Does the data center have an uptime SLA or DDoS protection?

    Uptime, yes; the DC talks about redundant power supply. It is still a Tier 2 DC, but it fits our budget. We have three components in our system: data collection, data publishing, and data processing. Data collection and data publishing will stay on AWS, while data processing will be moved to the DC and off-site. As for DDoS, we never asked; mostly these servers are not on public IPs, and no port is open to the outside except the WireGuard one.

    • What does the remote hands fee structure look like?

    Yet to check; we will talk to them.

    • If you do remote hands in person do you have access to the facility?

    They said yes. The place is a 19-hour journey from my city; we will talk to them in detail.

    • Is their support 24/7?

    Not checked yet; we will add it to the checklist.

    • What is the deracking fee to remove the system?

    Not checked yet; we will ask them again.

  • SGraf Hosting Provider, Services Provider
    edited January 2021

    @gks said:
    We are a very early-stage IoT and analytics company [not in the hosting business]. After working with good VPS hosts [netcup, Hetzner], we are now moving some servers into co-location in India.

    We bought used servers and are setting them up and testing them for LXD clusters at the moment. This is our first time handling bare metal and co-location.

    Do we need a hardware firewall at the data center, or can we manage with the Ubuntu Linux firewall?
    We will have a switch for internal networking.
    Data Center has redundant power supply.
    We got a quotation for 1 kVA; we have 4 servers + 1 switch. If I understand the math correctly, that can handle about 800 watts.

    Is there any checklist for co-location?

    The first thing to consider is actually your data-center options: how far away is the data-center?

    Next up, you have to determine your space and usage requirements (i.e. how many U of rack space, and how much power and bandwidth you need).

    Once you have all that, consider this: do you want to colo/use a DC (1) so the hardware is not on premises, or (2) to get closer to your target customers? If it's just #1, choose a data-center near you over one far away; it will make your life much easier when you want to get things changed or added.
    If it's #2, check what remote hands looks like (in terms of cost, scope of work carried out, and so on), and how much racking and de-racking your gear costs.

    How do you store and handle spare parts?


    Let's talk about space. You can either place your gear in a shared rack, 1U per server at a time, or get a quarter, third, half, or full rack at the data-center. The advantage of shared rackspace is simple: low initial investment. You really don't need much, as the DC will most likely supply you with a network drop; all you need is basically the correct power cable and, if your server is long, a rail kit (sometimes DCs can supply a shelf instead). The downside is that you will not get much custom networking done.

    If you want to place more gear in a DC, or you want custom networking options, then you will most likely end up with a "compartment" of a rack (i.e. quarter, half, ...) that is individually lockable. Chances are good that, in addition to your own gear, you will need at least a switch. You will commonly find at least two network drops/uplinks. In both cases, inquire about the depth of the rackmount area itself and make sure your gear fits post to post; on less-than-full-rack setups, there are sometimes cable guides in the way at the rear preventing you from using the full depth. On 1/4 racks and up: check how many U are usable and whether anything such as a patch panel or power distribution unit is blocking some U on the back side.

    On quarter/half/full rack setups you will commonly have the option to opt out of the data-center doing all the routing for you and to bring your own routers. If you don't see any benefit from this, check with the DC whether they can supply your networking with some type of redundancy such as VRRP or CARP. A good middle ground with two uplinks can also be one of those "layer 3" switches that does a bit of routing/load balancing over two datacenter-supplied uplinks, if you do not mind the single point of failure.

    On single-U/shared rackspace you will commonly have a power allowance. On the bigger units, you either get a power allowance or a price per kWh of power consumed. Measure your gear beforehand and do a bit of math to see which option to choose. If you get a full rack, check whether you can upgrade the power later if you need it.

    So let's talk about bandwidth. With Hetzner and the like, you usually get flat-rate/fair-use traffic. With other hosts, such as me, you get a transfer allowance (i.e. 5/10/50/... TB per month). In data-centers it is very common to find 95th-percentile billing; you might want to read up on it (a small worked example is below). If it doesn't suit your use-case, inquire with the DC about either flat-rate or per-TB billing. When it comes to this and IP addresses: get quite a bit more than you think you will need for the next x years, or you risk running into problems.
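
    A toy illustration of how 95th-percentile billing is commonly computed (5-minute samples over a month, top 5% of samples discarded; the traffic numbers are made up):

    ```python
    # Toy 95th-percentile ("burstable") billing illustration.
    # Carriers typically sample traffic every 5 minutes, sort the month's
    # samples, discard the top 5%, and bill the highest remaining sample.
    import random

    random.seed(1)
    # ~8640 five-minute samples in a 30-day month, in Mbit/s (made up).
    samples = [random.uniform(5, 40) for _ in range(8640)]
    samples[:100] = [random.uniform(200, 400) for _ in range(100)]  # bursts

    samples.sort()
    billable = samples[int(len(samples) * 0.95)]   # 95th-percentile sample

    print(f"peak: {max(samples):.0f} Mbit/s, billed at: {billable:.0f} Mbit/s")
    # The top 5% of samples (~36 hours per month of bursting) is not billed.
    ```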


    Do we need a hardware firewall at the data center, or can we manage with the Ubuntu Linux firewall?

    Depends on your setup; what are you going for? I would always recommend system hardening either way (a quick self-audit sketch is below).
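
    As one starting point for that hardening, here is a small Linux self-audit sketch that lists listening TCP ports by parsing /proc/net/tcp; anything you did not intend to expose should be firewalled or disabled. It is only a first pass, not a substitute for proper hardening:

    ```python
    # List listening TCP ports on Linux by parsing /proc/net/tcp{,6}.
    def listening_ports():
        ports = set()
        for path in ("/proc/net/tcp", "/proc/net/tcp6"):
            try:
                with open(path) as f:
                    next(f)                    # skip the header line
                    for line in f:
                        fields = line.split()
                        local, state = fields[1], fields[3]
                        if state == "0A":      # 0A == TCP LISTEN
                            ports.add(int(local.rsplit(":", 1)[1], 16))
            except FileNotFoundError:
                pass
        return sorted(ports)

    print("listening TCP ports:", listening_ports())
    ```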

    We will have a switch for internal networking.

    Most of the time this means you are looking at either getting an extra U of rackspace for the switch (if they let you bring your own) in a shared cabinet, or going with at least a 1/4 rack (if available).

    Data Center has redundant power supply.

    If they have two power feeds and you want to use them properly: do you have dual-PSU servers? If not, look into a transfer switch (and check whether that is permitted in your DC).

    We got a quotation for 1 kVA; we have 4 servers + 1 switch. If I understand the math correctly, that can handle about 800 watts.

    ... Servers don't draw a fixed amount of power: some draw very little (sub-60 W under full load), others go way over 200 W. Measure your gear at idle and at peak load (I recommend a Monero/Bitcoin/... CPU miner or something similar; let it run a while until the fans are at full speed). A load-generator sketch is below.
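
    If you'd rather not run a miner, a plain Python busy-loop across all cores gets the fans spinning just as well; run it while you read the draw off a wattmeter:

    ```python
    # Pin every core at 100% for a fixed time so peak power draw can be
    # read off a wattmeter. A CPU miner or stress-ng does the same job.
    import multiprocessing, time

    def burn(seconds: float):
        end = time.time() + seconds
        while time.time() < end:
            pass                               # busy-wait: one core at 100%

    if __name__ == "__main__":
        DURATION = 600                         # 10 minutes; adjust as needed
        procs = [multiprocessing.Process(target=burn, args=(DURATION,))
                 for _ in range(multiprocessing.cpu_count())]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
    ```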

    The rest @Ian_Dot_Tech covered very well.



  • @SGraf Thank you so much for the detailed discussion; I now have a lot to ask the DC. The used servers we bought have two redundant power supplies [both work, too]. I will check with the host and post the answers here.

  • Janevski
    edited January 2021

    Just colocate at home.
    If your internet is strong enough, your IP is static enough, and you have enough power to plug in a small heater stove, then you can colocate at home.
    Just lock the doors and windows and sleep near the server, so nasty thieving people are kept away.
    Plus you can use the power for heating during winter times... Efficiency 200%.

  • mikho Administrator, OG

    Basically, the DC only provides a location and an Internet connection.

    Your checklist should contain the same things as if you hosted it locally.
    Some things on that checklist can be bought from the DC; some things you need to provide yourself.



  • @Janevski said:
    Just colocate at home.
    If your internet is strong enough, your IP is static enough, and you have enough power to plug in a small heater stove, then you can colocate at home.
    Just lock the doors and windows and sleep near the server, so nasty thieving people are kept away.
    Plus you can use the power for heating during winter times... Efficiency 200%.

    We have 2 workstations with WireGuard, home broadband, a home UPS, and a backup generator from the apartment; they have had more than 35 days of uptime now.
    We also set up 4 servers at the office lab with WireGuard, broadband, and a UPS; 14 days of uptime since the setup was done.

    This led us to co-locating a few servers. We are gaining some confidence every day; LES is super helpful.
