
:freebsd_logo: FBSD 14.x Kernel Build :freebsd_logo:

81 seconds to compile a copy of GENERIC_KCSAN kernel on GhostBSD 24.10 (FreeBSD 14.1 base).

That's generally acceptable performance for an often silent Micro-ATX workstation (EPYC 4564P 16C/32T, 4.5GHz, 128/ECC, MB: H13SAE-MF). Potential improvements abound, sort of, given two requirements:

1. Low-dB acoustics, not "pitperf maxxing*"
2. Usage for mid-level AI/ML on VMs for LLMs

What could be improved?
a) Upgrade GPU: 2x A4000 → Ada Gen
b) Upgrade NVMe: 2x M.2 PCIe Gen4 → Gen5
c) Swap 4x 32GB ECC → 4x 48GB ECC
d) Swap 4x DDR5-4800 → DDR5-5200

Cost/Benefit on those potential upgrades?
a) Cost = $$$, Benefit = ~10-25% vector perf
b) Cost = $, Benefit = ~1.5x I/O perf
c) Cost = $$, Benefit = 128GB → 192GB 🤤
d) Cost = $$$, Benefit = not a big deal

* PiT-Perf == Point In Time Performance
* Maxxing == Engaging in Applied Maximalism

#freebsd #foss #oss

…The

“Let’s load a progress bar so we can take over 30 seconds to compile and render a 250-row pseudo-table and tell ourselves that this is #perf ormant”

…guide to React-ing to requests

- - -

The

“Let’s set targets for ‘net zero’ that are beyond the current parliament and tell ourselves we are world leaders”

…guide to reacting to scientific findings

Does anyone have any good #linux or #bsd resources explaining the nitty-gritty details of how #perf tends to suffer under max load due to timing windows being missed, tasks needing to be retried, etc.? I'm thinking about process stalls due to higher iowait, memory-allocation pressure leading to inefficient paging, disk command-queue saturation leading to inefficient process wake-ups, etc. Conference slides or recordings would be great. Blog posts too.

#Scala #data #perf question:
Which is better: intersecting key Sets and then filtering, several contains checks, or something else?

I have a Map[Key, Facts] of 1k–30k entries.
I have a list of (user-submitted, variable) filters to apply:

  • (k1) some are a subset of Keys to keep (i.e. I have a collection, an Array I think, of valid keys)
  • (k2) some are of the form Facts => Boolean.

I can have zero, one, or several filters of each kind, but "several" is always small (10 is a good upper-bound heuristic), and there is always at least one (I already handle the zero-filter case efficiently ;))

For the second kind, k2, it's better to combine them all and traverse the map only once.
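That single-pass combination could look like the following minimal sketch; `Facts`, `facts`, and `predicates` are illustrative placeholders, not names from the post:

```scala
// Minimal sketch: fold all of the k2 (Facts => Boolean) filters into a
// single predicate, so the Map is traversed exactly once.
case class Facts(value: Int) // stand-in for the real Facts type

val facts: Map[String, Facts] =
  Map("a" -> Facts(1), "b" -> Facts(2), "c" -> Facts(3))

val predicates: List[Facts => Boolean] =
  List(_.value > 1, _.value < 3)

// One pass over the map; forall short-circuits on the first failing predicate.
val result = facts.filter { case (_, f) => predicates.forall(p => p(f)) }
// result == Map("b" -> Facts(2))
```

With at most ~10 predicates, the `forall` overhead per entry is negligible compared to a second traversal of a 30k-entry map.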

For the first kind, I don't know whether it's better to:

  • pre-filter the map's key set, and if so how (start by intersecting each set? how exactly? etc.)
  • combine with the k2 filters and add a "set contains key" check for each entry
  • something else?
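The two k1 options above can be sketched side by side; `data` and `ks` are hypothetical stand-ins for the real Map and the user-submitted key Array:

```scala
// Hedged sketch comparing the two k1 strategies on toy data.
val data: Map[String, Int] = Map("a" -> 1, "b" -> 2, "c" -> 3)
val ks: Array[String] = Array("a", "c", "x")

// Strategy 1: intersect the key sets first, then look the survivors up.
// Work is proportional to the smaller of the two key sets.
val keep: Set[String] = data.keySet.intersect(ks.toSet)
val viaIntersect: Map[String, Int] = keep.iterator.map(k => k -> data(k)).toMap

// Strategy 2: one pass over the map with a contains check per entry;
// this composes naturally with the combined k2 predicate in the same pass.
val ksSet: Set[String] = ks.toSet // O(1) membership vs. a linear Array scan
val viaContains: Map[String, Int] = data.filter { case (k, _) => ksSet.contains(k) }
// Both yield Map("a" -> 1, "c" -> 3)
```

Either way, converting the key Array to a `Set` up front matters more than which traversal order you pick, since `Array.contains` is a linear scan.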

Any insight on time/memory complexity or which data structure to use would be much appreciated (links to papers/resources welcome; it looks like a classical problem).

Thanks!