Google, the Stupidity Amplifier (2012)
49 by Santosh83 | 4 comments on Hacker News.
Monday, August 31, 2020
New top story on Hacker News: Writing a Lisp to x86-64 compiler
Writing a Lisp to x86-64 compiler
7 by tekknolagi | 1 comment on Hacker News.
I'm starting to write a series on compiling Lisp to x86-64 and I would appreciate any and all feedback. Find the first post at https://ift.tt/32En0er
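For a flavor of what the first step in a series like this usually looks like, here is a generic sketch (not taken from the linked post): compile the smallest possible Lisp programs, such as integer literals and a unary add1, straight into x86-64 assembly text, keeping the result in %rax.

    // Generic sketch, not from the series: compile a tiny Lisp subset to
    // x86-64 assembly text (AT&T syntax). Integers become immediates in %rax;
    // (add1 e) becomes an addq on top of the compiled sub-expression.
    type Expr = number | ["add1", Expr];

    function compileExpr(expr: Expr): string {
      if (typeof expr === "number") {
        return `    movq $${expr}, %rax\n`;            // load the literal
      }
      const [, arg] = expr;
      return compileExpr(arg) + "    addq $1, %rax\n"; // increment the result
    }

    // Wrap the compiled body in a callable entry point.
    const asm =
      "    .globl lisp_entry\nlisp_entry:\n" +
      compileExpr(["add1", 41]) +
      "    ret\n";
    console.log(asm);

Compilers built in this incremental style then layer on tagged values, bindings, and calling conventions, which is presumably where the series is headed.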
Sunday, August 30, 2020
New top story on Hacker News: Nagara Rimba Nusa: A Take on Indonesia's New Capital City
Nagara Rimba Nusa: A Take on Indonesia's New Capital City
4 by simonebrunozzi | 1 comment on Hacker News.
Saturday, August 29, 2020
New top story on Hacker News: Tell HN: Check medium's localstorage if you use adblock
Tell HN: Check medium's localstorage if you use adblock
40 by ev1 | 2 comments on Hacker News.
If you have uBlock or similar, it appears Medium logs all analytics pings into HTML5 localStorage and will keep retrying to send them (apparently periodically changing domains and subdomains to try to send them). I had tens of thousands of entries in localStorage, wasting quite a bit of space, all of them at least 400-600 characters or more. Each time I scrolled it'd add a few dozen more, to the point where devtools was freezing. Ridiculous. Example: https://ift.tt/2QAyqu0
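If you want to check your own browser, a quick console snippet along these lines shows how much is sitting in localStorage for the site you're on. The tallying is generic; the post doesn't say what Medium's keys are actually named, so nothing here filters for them specifically.

    // Rough browser-console sketch: count localStorage entries and roughly
    // how much text they hold for the current site (e.g. medium.com).
    let entries = 0;
    let chars = 0;
    for (let i = 0; i < localStorage.length; i++) {
      const key = localStorage.key(i) ?? "";
      chars += key.length + (localStorage.getItem(key) ?? "").length;
      entries++;
    }
    console.log(`${entries} entries, roughly ${Math.round(chars / 1024)} KiB of text`);
    // localStorage.clear() reclaims the space, at the cost of anything
    // legitimate the site stored (logins, preferences), so use judgement.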
Friday, August 28, 2020
Thursday, August 27, 2020
Wednesday, August 26, 2020
Tuesday, August 25, 2020
Monday, August 24, 2020
Sunday, August 23, 2020
New top story on Hacker News: Ask HN: How to do cross platform GUI?
Ask HN: How to do cross platform GUI?
18 by anang | 28 comments on Hacker News.
In essentially every discussion about desktop applications there are a lot of comments about how not to build desktop apps, but very little sharing of resources showing how to do it right. I've seen people defend Electron, talk about core logic in a cross-platform language with native GUI code, and any number of other options. As a middle-of-the-road developer I think it's difficult to find any consensus (besides Electron being both simple and hated). What resources are there for building quality, functional cross-platform desktop applications?
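For context on the "Electron is simple" half of that tension, the entire entry point of a desktop app can look like the generic sketch below (the file name is a placeholder, and this is an illustration, not a recommendation from the thread). The "hated" half is the Chromium runtime and memory footprint you bundle to get it.

    // Minimal Electron main process: one codebase, windows are Chromium pages,
    // and the UI itself is ordinary HTML/CSS/JS loaded below.
    import { app, BrowserWindow } from "electron";

    app.whenReady().then(() => {
      const win = new BrowserWindow({ width: 800, height: 600 });
      win.loadFile("index.html"); // placeholder UI entry point
    });

    app.on("window-all-closed", () => app.quit());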
Saturday, August 22, 2020
New top story on Hacker News: The Mega-Tsunami of July 9, 1958 in Lituya Bay, Alaska (1999)
The Mega-Tsunami of July 9, 1958 in Lituya Bay, Alaska (1999)
13 by jacobwilliamroy | 3 comments on Hacker News.
Friday, August 21, 2020
Thursday, August 20, 2020
Wednesday, August 19, 2020
Tuesday, August 18, 2020
Monday, August 17, 2020
New top story on Hacker News: Launch HN: Batch (YC S20) – Replays for event-driven systems
Launch HN: Batch (YC S20) – Replays for event-driven systems
3 by dsies | 0 comments on Hacker News.
Hello HN! We are Ustin and Daniel, co-founders of Batch ( https://batch.sh ) - an event replay platform. You can think of us as version control for data passing through your messaging systems. With Batch, a company is able to go back in time, see what data looked like at a certain point and, if it makes sense, replay that piece of data back into the company's systems.

This idea was born out of getting annoyed by what an unwieldy black box Kafka is. While many folks use Kafka for streaming, there is an equal number of Kafka users that use it as a traditional messaging system. Historically, these systems have offered very poor visibility into what's going on inside them and offer (at best) a poor replay experience. This problem is prevalent pretty much across every messaging system. Especially if the messages on the bus are serialized, it is almost guaranteed that you will have to write custom, one-off scripts when working with these systems. This "visibility" pain point is exacerbated tenfold if you are working with event-driven architectures and/or event sourcing - you must have a way to search and replay events, as you will need to rebuild state in order to bring up new data stores and services. That may sound straightforward, but it's actually really involved. You have to figure out how and where to store your events, how to serialize them, search them, play them back, and how/when/if to prune, delete or archive them.

Rather than spending a ton of money on building such a replay platform in-house, we decided to build a generic one and hopefully save everyone a bunch of time and money. We are 100% believers in "buy" (vs "build") - companies should focus on building their core product and not waste time on side quests. We've worked on these systems before at our previous gigs and decided to put our combined experience into building Batch. A friend of mine shared this bit of insight with me (that he heard from Dave Cheney, I think?) - "Is this what you want to spend your innovation tokens on?" (referring to building something in-house) - and the answer is probably... no. So this is how we got here!

In practical terms, we give you a "connector" (in the form of a Docker image) that hooks into your messaging system as a consumer and begins copying all data that it sees on a topic/exchange to Batch. Alternatively, you can pump data into our platform via a generic HTTP or gRPC API. Once the messages reach Batch, we index them and write them to a long-term store (we use https://ift.tt/3g1tMPV ). At that point, you can use either our UI or HTTP API to search and replay a subset of the messages to an HTTP destination or into another messaging system. Right now, our platform is able to ingest data from Kafka, RabbitMQ and GCP PubSub, and we've got SQS on the roadmap. Really, we're cool with adding support for whatever messaging system you need as long as it solves a problem for you.

One super cool thing is that if you are encoding your events in protobuf, we are able to decode them upon arrival on our platform, so that we can index them and let you search for data within them. In fact, we think this functionality is so cool that we really wanted to share it - surely there are other folks that need to quickly read/write encoded data to various messaging systems. We wrote https://ift.tt/3jXMFX4 for that purpose. It's like a curl for messaging systems and currently supports Kafka, RabbitMQ and GCP PubSub. It's a port from an internal tool we used when interacting with our own Kafka and RabbitMQ instances.

In closing, we would love for you to check out https://batch.sh and tell us what you think. Our initial thinking is to allow folks to pump their data into us for free with 1-3 days of retention. If you need more retention, that'll require $ (we're still figuring out the pricing model). Our #1 goal right now is to chat with folks who have experience in this field and/or have experienced pain in this space. Oh, one last ask - if you are data compliance savvy, we'd love to chat with you - we need to store gobs of important data that falls under all kinds of regulations and, well, we could use a hand there. OK, that's it! Thank you for checking us out! ~Dan & Ustin

P.S. Forgot about our creds: I (Dan) spent a large chunk of my career working at data centers doing systems integration work. I got exposed to all kinds of esoteric things like how to integrate diesel generators into CMSs and automate VLAN provisioning for customers. I also learned that "move fast and break things" does not apply to data centers, haha. After data centers, I went to work for New Relic, followed by InVision, Digital Ocean and most recently, Community (which is where I met Ustin). I work primarily in Go, consider myself a generalist, prefer light beers over IPAs and dabble in metal (music) production. Ustin is a physicist turned computer scientist and worked towards a PhD on distributed storage over lossy networks. He has spent most of his career working as a founding engineer at startups like Community. He has a lot of experience working in Elixir and Go and working on large, complex systems.
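For a sense of what the "generic HTTP API" ingest path described above could look like from the client side, here is a hypothetical sketch. The endpoint URL, auth token handling, and payload shape are all invented for illustration; the post does not document Batch's actual API.

    // Hypothetical client-side sketch of pushing an event into an archive/
    // replay service over plain HTTP. The URL, token, and body shape are
    // assumptions, not Batch's documented API.
    const COLLECTOR_URL = "https://example-collector.batch.sh/v1/events"; // assumed
    const TOKEN = "YOUR_TOKEN_HERE";                                      // placeholder

    async function publishEvent(topic: string, payload: unknown): Promise<void> {
      const res = await fetch(COLLECTOR_URL, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${TOKEN}`,
        },
        body: JSON.stringify({ topic, payload, ts: Date.now() }),
      });
      if (!res.ok) throw new Error(`ingest failed: ${res.status}`);
    }

    // A replay would then be a search over the archived events, pushed back to
    // an HTTP destination or another bus, as the post describes.
    publishEvent("orders", { id: 123, status: "created" }).catch(console.error);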
New top story on Hacker News: Show HN: AppleBot – Automate Things in Apple Dev Portal
Show HN: AppleBot – Automate Things in Apple Dev Portal
19 by kenny_hitcher | 0 comments on Hacker News.
Sunday, August 16, 2020
New top story on Hacker News: Ask HN: Does Google have no way to report scam advertisements?
Ask HN: Does Google have no way to report scam advertisements?
19 by eisa01 | 4 comments on Hacker News.
The linked ad [1] showed up on a website I visited. It's a scam for bitcoin trading, presented using the same layout as our national broadcaster's website. I tried to report it, but the only option that fit was "inappropriate." Does Google really have no way to report scams? I am afraid the reviewers will miss it, as "inappropriate" often refers to sexual decency. This type of scam, involving celebrities and fake news articles using national media layouts, has been going on for more than a year to my recollection. [1] https://ift.tt/2Y558I2
New top story on Hacker News: Landmark Math Proof Clears Hurdle in Top Erdős Conjecture
Landmark Math Proof Clears Hurdle in Top Erdős Conjecture
10 by headalgorithm | 0 comments on Hacker News.