Technitium DNS: the insider tip among ad blockers
#Datenschutz #Test #AdblockListen #AlthaTechnology #Caching #DNSServer #TechnitiumDNSServer #Trackingdienst https://sc.tarnkappe.info/b92216

Still repeating the same SQL query?
With Temma, cache your results using just 1 BO + 1 cache source.
Simple. Lightweight. Efficient.
See all our tutorials: https://www.temma.net/tuto/mini-cache
Wrote a short blog post about PHP's OPcache strings buffer and WordPress.
https://roytanck.com/2025/05/15/wordpress-and-phps-opcache-strings-buffer/
#Development #Releases
Redis is open source again · Redis 8 is the first version with the new license https://ilo.im/163me5
_____
#Redis #OpenSource #Caching #DataStorage #Database #NoSQL #WebDev #Backend
RELEASED: LSCache v7.1 for WordPress! In this release: Critical CSS Allowlist, bug fixes, and more! https://wordpress.org/plugins/litespeed-cache/ #litespeed #WordPress #caching
Quick question for folks who understand HTTP caching on reverse proxies like Squid or Cloudflare. If I have a GET REST endpoint responding with 200 OK and the following headers:
Cache-Control: public, max-age=3600
ETag: "123-a"
The proxy should cache and serve the response without hitting the underlying server more than once for the first hour, then send a request with If-None-Match: "123-a" once the cached copy goes stale, right? Is there any reason why it wouldn’t?
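That is the expected behaviour, with one detail worth flagging: the revalidation request header is If-None-Match (If-Match exists too, but it guards conditional writes, not cache revalidation). A minimal sketch of the freshness/revalidation decision a cache makes, with illustrative function names not taken from any proxy's codebase:

```python
import re
import time

def parse_max_age(cache_control):
    """Extract max-age (seconds) from a Cache-Control header value."""
    m = re.search(r"max-age=(\d+)", cache_control)
    return int(m.group(1)) if m else None

def revalidation_headers(stored_at, cache_control, etag, now=None):
    """Return the extra request headers a cache would send upstream.

    An empty dict means the cached response is still fresh and can be
    served without contacting the origin at all.
    """
    now = time.time() if now is None else now
    max_age = parse_max_age(cache_control) or 0
    if now - stored_at < max_age:
        return {}                       # fresh: serve straight from cache
    return {"If-None-Match": etag}      # stale: conditional GET upstream

# Within the first hour the entry is fresh, no origin request needed:
assert revalidation_headers(1000, "public, max-age=3600", '"123-a"', now=2000) == {}
# After an hour the cache revalidates with If-None-Match:
assert revalidation_headers(1000, "public, max-age=3600", '"123-a"',
                            now=5000) == {"If-None-Match": '"123-a"'}
```

If the origin answers the conditional GET with 304 Not Modified, the cache keeps serving its stored copy and resets freshness.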
Although frequently misunderstood, the HTTP Cache-Control header is crucial because it specifies caching mechanisms within requests and responses. In its typical form, it controls whether and where a response may be stored, and how long it stays fresh before expiring…
In our latest blog post, Kieran Larking highlights that the No-cache directive does not prevent caching and looks at typical caching behaviour directives and how to correctly use these directives to balance performance and security: https://www.pentestpartners.com/security-blog/take-control-of-cache-control-and-local-caching/
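The key point from the post, that no-cache does not prevent caching, can be sketched as a rough classifier of what a shared cache may do with a response (illustrative only; real caches follow RFC 9111 in far more detail):

```python
def storage_policy(cache_control):
    """Rough classification of what a shared cache may do with a response.

    Illustrative sketch only -- real caches implement RFC 9111 semantics
    in much more detail.
    """
    directives = {d.strip().lower() for d in cache_control.split(",")}
    if "no-store" in directives:
        return "must not store"
    if "no-cache" in directives:
        return "may store, but must revalidate before each reuse"
    if "private" in directives:
        return "browser cache only, not shared caches"
    return "may store and serve while fresh"

# The article's point: no-cache still allows storing the response.
assert storage_policy("no-cache") == "may store, but must revalidate before each reuse"
# Only no-store actually forbids storage.
assert storage_policy("no-store") == "must not store"
```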
Discover how #Netflix uses #EVCache to master the complexities of global replication.
Learn about the architecture, design principles, and innovative strategies behind their scalable success: https://bit.ly/3NqFdE4
«#Joblib is a set of tools to provide lightweight pipelining in #Python. In particular:
Joblib is optimized to be fast and robust on large data in particular and has specific optimizations for #numpy arrays. It is BSD-licensed.»
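Joblib's headline feature is memoizing function results to disk. Here is a stdlib-only sketch of that pattern (joblib itself adds argument hashing tuned for large numpy arrays, cache management, and more); the decorator name is hypothetical:

```python
import hashlib
import os
import pickle
import tempfile

def disk_cache(cache_dir):
    """Memoize a function's results to disk, keyed by pickled arguments.

    A stdlib sketch of what joblib.Memory provides out of the box.
    """
    os.makedirs(cache_dir, exist_ok=True)
    def decorator(func):
        def wrapper(*args, **kwargs):
            key = hashlib.sha256(
                pickle.dumps((func.__name__, args, sorted(kwargs.items())))
            ).hexdigest()
            path = os.path.join(cache_dir, key + ".pkl")
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return pickle.load(f)   # cache hit: skip the computation
            result = func(*args, **kwargs)
            with open(path, "wb") as f:
                pickle.dump(result, f)      # cache miss: compute and store
            return result
        return wrapper
    return decorator

calls = []

@disk_cache(tempfile.mkdtemp())
def square(x):
    calls.append(x)
    return x * x

assert square(4) == 16
assert square(4) == 16   # second call is served from disk
assert calls == [4]      # the underlying function ran only once
```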
Apparently it's time I got some understanding of Arc, RwLock and Mutex, because I can't figure out if I need Arc to use concread::ARCache from #async functions.
It says it's a replacement for the latter two, but the only example I found wraps it in an Arc.
Looking at #RustLang #caching crates and each one looks better than the last!
I'm also struck by the humility and respect for earlier crates in the docs.
Still looking, but liking #quick_cache.
@0xF21D@infosec.exchange
Not necessarily when you factor in that #Mastodon is the absolute worst #Fedi platform on the planet for #admins and server operators.
I'm sure #Mozilla found out, like most Mastodon operators of large instances, that their #S3 storage #costs ballooned exponentially thanks to Mastodon's abject refusal to turn off media #caching for the whole #fediverse, forcing admins and operators to cache and store all media -- from all #instances -- that pass through their instance.
Simply put, Mastodon fucking sucks as a fedi platform for admins and operators and I wouldn't be surprised if you don't see more larger instances folding in the next year.
It's an unsustainable architecture.
RE: https://infosec.exchange/users/0xF21D/statuses/113636032334271805
Google added an HTTP Caching section to its crawler documentation and begged us to allow caching https://www.seroundtable.com/google-crawler-http-caching-details-38547.html via @methode
This #InfoQ article delves into how #Netflix employs #EVCache, a distributed caching solution, to master the complexities of global replication.
Dive deep into the architecture, design principles, and innovative strategies that empower Netflix to operate at an immense scale while maintaining stringent performance standards.
Check out the full article: https://bit.ly/3NqFdE4
This is a great summary of #caching headers, cache busting and related topics: https://csswizardry.com/2019/03/cache-control-for-civilians/
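The cache-busting idea that article covers boils down to putting a content hash in the filename, so each revision gets a new URL and the asset can be served with a far-future Cache-Control header. A hedged sketch (function name is illustrative):

```python
import hashlib

def busted_name(filename, content):
    """Embed a short content hash in a filename, e.g. app.3b2c8d1a.css.

    With a unique name per revision, the asset can safely be served with
    Cache-Control: public, max-age=31536000, immutable.
    """
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"

a = busted_name("app.css", b"body { color: red }")
b = busted_name("app.css", b"body { color: blue }")
assert a != b                                  # new content -> new URL, no stale cache
assert a.startswith("app.") and a.endswith(".css")
```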
@gerowen i'd say get a v cheap vps and a domain and then reverse proxy a box to avoid cpu and disk quotas - ddns works if your isp is cool with it but with vps you can run vpn and have access to much faster bandwidth - more than likely. also more privacy - your isp won't see your traffic though you may have to use proxychain to connect to some sites. i have pretty slow upload speeds so the vps is a nice way around that #siege #caching proxy
This week I learned that full html page caching for #SSR #nodejs (without static pre-rendering on build time) needs some custom work.
E.g. setting up a cloudflare worker + cloudflare cache in front - or writing a redis response cache, use varnish, etc.
My naive thought was that this is already baked into #SvelteKit, #NuxtJS, #NextJS, etc.
Guess I'm a bit spoiled by Blitz Cache for CraftCMS (PHP) which just takes care of caching html responses + redirecting #caching https://putyourlightson.com/plugins/blitz
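The "custom work" described above (Redis response cache, Varnish, a Cloudflare worker) mostly amounts to keying rendered HTML by URL with a TTL. A hedged in-memory sketch of that layer, with illustrative names, not any framework's real API:

```python
import time

class ResponseCache:
    """Tiny in-memory TTL cache for rendered HTML, keyed by URL.

    A sketch of what a Redis/Varnish layer does in front of an SSR
    server; a real deployment also needs invalidation and size limits.
    """
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}

    def get_or_render(self, url, render, now=None):
        now = time.time() if now is None else now
        hit = self.store.get(url)
        if hit and now - hit[1] < self.ttl:
            return hit[0]                 # cache hit: skip SSR entirely
        html = render(url)                # cache miss: render and store
        self.store[url] = (html, now)
        return html

renders = []
def render(url):
    renders.append(url)
    return f"<html>{url}</html>"

cache = ResponseCache(ttl_seconds=60)
assert cache.get_or_render("/a", render, now=0) == "<html>/a</html>"
assert cache.get_or_render("/a", render, now=30) == "<html>/a</html>"  # hit
cache.get_or_render("/a", render, now=120)       # expired -> re-render
assert renders == ["/a", "/a"]
```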
Is anyone aware of a more recent alternative to apt-cacher-ng? I would like to run an apt cache on a small local server. What bugs me about apt-cacher-ng is that:
(Snooping around in the docs, I realised that all documentation about #caching and #mirroring #apt #repos is fairly outdated. E.g. both #Debian and #Ubuntu docs warn that creating a mirror requires “massive” amounts of storage space, then quantifying that to be around 60 to 80 GB (hihihi).)
An alternative that is mentioned is to use a general purpose caching proxy like squid… I may resort to that.
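If squid is the fallback, the apt-specific part is mostly a matter of caching .deb files aggressively, since published packages are immutable. A hypothetical minimal squid.conf sketch (directive names are real squid options; all values are illustrative, not tuned recommendations):

```
# Illustrative squid.conf fragment for caching apt packages.
cache_dir ufs /var/spool/squid 20000 16 256   # ~20 GB on-disk cache
maximum_object_size 512 MB                    # allow large .deb files
# .deb files never change once published, so keep them for a long time
# (refresh_pattern times are in minutes; 129600 min = 90 days)
refresh_pattern deb$ 129600 100% 129600
refresh_pattern \.(udeb|tar\.(gz|xz|bz2))$ 129600 100% 129600
```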
Part 15: Now, before I forget everything… Can a GitHub Action build and then cache OpenUSD? Yes, Virginia, it can.
EDIT: THE LINK!!! https://www.whynotestflight.com/excuses/hello-usd-part-15-can-a-github-action-cache-a-openusd-build/