havoc
About
- Username: havoc
- Visits: 1,233
- Roles: Member, OG, Content Writer
- Thanked: 1,102
Comments
-
I mostly don't - found limited value in it. Though that's me as a selfhoster - I could see the LES provider gang ending up with different conclusions. If I need to, I try to aim for the grafana ecosystem. It's not the easiest (netdata is point & shoot)…
-
(Quote) Consider the possibility that some of the "don't do it" are actually serious. I for one decided against it given the lengthy list of negatives. Fraud, angry customers, stupid customers, expectations of support at unreasonable hour…
-
And this is why we can't have nice things
-
My money is on the billionaires. Powering down millions of computers via remote hands ain't gonna be cheap. Besides programmers don't feature in the prophecy (Quote)
-
Recently acquired a fairly powerful dedi (hez ax44) for a project and am thinking it may be overkill. This is a compute task not hosting though. Realised I do actually have enough horsepower already...it's just too spread out. So now wondering if …
-
Not yet, but it would be an Arch derivative. (Or, if insanity strikes on install day, Nix.) Been contemplating it for a while but Netflix kept me on Windows. (Yes, it works on nix, but even at 1080p the bitrate ends up much lower.) But the N is irritating me r…
-
Surprised X3D parts are being used for servers
-
(Quote) Just because you can doesn't mean you should
-
You sure you don't want to get like a cheap notebook or chromebook or something? SSH on phones is painful af
-
sigh...note that there is an ext4 data corruption issue going around that suggests not updating
-
(Quote) wut? No... PeerTube is not a YouTube front end. It's got nothing to do with YT at all and is more about the backend. For a YT front end you'd want something like Invidious. Though I think YT has started blocking that. Ympker - I've used t…
-
(Quote) That unfortunately makes sense. I doubt an older AMD card will be straight-up competitive against Nvidia. They have some use if one can get inference running on them though, & I can think of 3 ways to potentially do that. No idea how the e…
-
(Quote) That would be cool. And yeah sure can document - similar to llama.cpp unicorn one...i.e. basics (Quote) Guessing a bit here but suspect you'd see a sizable difference for training rather than inference. The Mi50 has literally 2x the memory …
-
(Quote) Let me know when the Mi50 is live - keen to try that out. Very nearly bought an AMD card last round but chickened out because I wasn't confident I could get the AI stuff running on the AMD stack at the time. Given that AMD dropped ROCm support for…
-
SSDs have become sufficiently cheap that HDDs no longer make sense to me outside of data centers and data hoarders. The only HDD storage I still actively use is hetzner storageboxes. And I think maybe the oracle VMs that I'm not actively using are …
-
Better approach imo is just to hard-lock stuff that is super old. The new clueless will create new threads anyway
-
If it's GGUF format you can literally just download the file and drop it into the \models\ folder. That looks like ChatML, which is indeed what most of the Dolphin Mistral variants are tuned on. If you get a quant size that fits into your GPU's vram i…
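For reference, here's a minimal sketch of the ChatML template that comment is talking about. The `<|im_start|>`/`<|im_end|>` tags are the standard ChatML markers the Dolphin fine-tunes are trained on; the function name and example strings are just illustrative:

```python
def format_chatml(system: str, user: str) -> str:
    """Build a ChatML-style prompt: system and user turns, then an
    open assistant turn for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_chatml("You are a helpful assistant.", "Hello!")
print(prompt)
```

If the frontend you drop the GGUF into (text-generation-webui, LM Studio, etc.) lets you pick a prompt template, selecting ChatML does this formatting for you; the sketch is only to show what the template actually produces.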