Stuff The Internet Says On Scalability For October 11th, 2019

Friday, October 11, 2019 at 10:05AM

 Wake up! It’s HighScalability time:

Light is fast—or is it?

Do you like this sort of Stuff? I’d greatly appreciate your support on Patreon. And I wrote Explain the Cloud Like I’m 10 for all who want to understand the cloud. On Amazon it has 57 mostly 5 star reviews (135 on Goodreads). Please consider recommending it. You’ll be a cloud hero.

Number Stuff:

  • 1,717,077,725: number of web servers in 2019. In 1994? 623.
  • 7,000,000,000,000: LinkedIn messages sent per day with Apache Kafka (sort of).
  • more data than ever: collected by the LSST (Large Synoptic Survey Telescope) in its first year than all telescopes have ever collected—combined. It will do so for 10 years. That’s 15TB of data collected every night.
  • 3200 megapixels: LSST camera sensor, 250x better than an iPhone, the equivalent of half a basketball court filled with 4K TVs to display one raw image.
  • 4 million: new jobs created in Africa because of investment in cell phone networks.
  • 442%: ROI running Windows workloads on AWS; 56% lower five-year cost of operations; 98% less unplanned downtime; 31% higher internal customer satisfaction; 37% lower IT infrastructure costs; 75% more efficient IT infrastructure team; 26% higher developer productivity; 32% higher gross productivity.
  • several petabytes: logs generated per hour by millions of machines at Facebook. Scribe processes logs with an input rate that can exceed 2.5 terabytes per second and an output rate that can exceed 7 terabytes per second.

  • lowest: spending on tech acquisitions is at its lowest quarterly level in nearly two years, due to rising global uncertainty coupled with slowing economic growth.

  • 5%-8%: per year battery energy density improvement. We expect the storage per unit mass and volume of batteries will probably plateau within 10 to 20 years. At the same time, the market penetration of lithium batteries is doubling every 4 to 5 years. 

  • 27: tech companies raised $100 million or more, taking in a total of $7.1 billion during the month of September.

  • 50%: price reduction for Intel’s Cascade Lake X-Series. 

  • 16x: Redis faster at reading JSON blobs compared to PostgreSQL.

  • $225B: worth of Google Cloud. @lacker: I was working at Google when AWS launched in 2006. The internal message was, we don’t want to build a competitor. We have the technology to compete in this area, but it is fundamentally a low-margin business, whereas reinvesting in our core business is high-margin, so we should keep our infrastructure to ourselves.

  • 247,000+: Nextdoor neighborhoods.

  • $21.9B: app (App Store 65%, Google Play 35%) revenue in Q3 2019, a 23% increase. WhatsApp is #1. TikTok is #2. Messenger is #3. Facebook is #4. Instagram is #5. Mobile gaming is 74% of total revenue. 

  • 77,000: virtual neurons simulated in real-time on a 1 million processor supercomputer. 

Quotable Stuff:

  • @sh: What are some famous last words in your industry? @Liv_Lanes: “How does it scale?”
  • Erin Griffith: It is now difficult for “a growth-at-all-costs company burning hundreds of millions of dollars with negative unit economics” to get funding, he said. “This is going to be a healthy reset for the tech industry.”
  • @garykmrivers: Milk delivery 25 years ago was essentially a subscription service offering products with recyclable/reusable packaging, delivered by electric vehicles. Part of me thinks that if a techie firm were to have proposed this same idea today people would think it was incredible.
  • @investing_cit: Costco is a fascinating business. You know all those groceries you buy? Yeah, they basically sell those at break even and then make all of their profit from the $60 annual membership fees. This is the key. The company keeps gross margins as low as possible.  In turn, this gives it pricing authority. In other words, you don’t even look at the price because you know it’s going to be the best. Off of merchandise, Costco’s gross margins are only 11%. Compare this to Target. Gross margins are almost 30%. Or against Walmart. About 25%. The company sells its inventory typically before it needs to pay suppliers. In other words, suppliers do what Costco tells them to do. Costco has essentially aggregated demand which it can then leverage against its suppliers in the form of payment terms. See, the DSI and DPO are basically the same.  On top of this, Costco collects cash in about 4 days, so that’s the extent of the cash conversion cycle.
  • peterwwillis: In fact, I’m going to make a very heretical suggestion and say, don’t even start writing app code until you know exactly how your whole SDLC, deployment workflow, architecture, etc will work in production. Figure out all that crap right at the start. You’ll have a lot of extra considerations you didn’t think of before, like container and app security scanning, artifact repository, source of truth for deployment versions, quality gates, different pipelines for dev and prod, orchestration system, deployment strategy, release process, secrets management, backup, access control, network requirements, service accounts, monitoring, etc. 
  • Jessica Quillin: We are likely facing a new vision for work, one in which humans work at higher levels of productivity (think less work, but more output), thanks to co-existing with robots, working side-by-side personal robots, digital assistants, or artificial intelligence tools. Rather than being bogged down by easily automated processes, humans can leverage robots to focus on more abstract, creative tasks, bringing about new innovative solutions.
  • Edsger W. Dijkstra: Abstraction is not about vagueness, it is about being precise at a new semantic level.
  • @TooMuchMe: Tomorrow, the City of Miami will vote on whether to grant a 30-year contract on light poles that will have cameras, license plate readers and flood sensors. For free. The catch: Nothing would stop the contracting company from keeping all your data and selling it to others.
  • K3wp: I used to work in the same building as [Ken Thompson]. He’s a nice guy, just not one for small talk. Gave me a flying lesson (which terrified me!) once. My father compares him to Jamie Hyneman, which is apt. Just a gruff, no-nonsense engineer with no time or patience for shenanigans
  • Richard Lawson: McDonnell recalls, wistfully, the bygone days, when a creator could directly email the guy who ran YouTube’s homepage. These days, nearly every creator I spoke to seemed haunted and awed by the platform’s fabled algorithm. They spoke of it as one would a vague god or as a scapegoat, explaining away the fading of clout or relevance.
  • @Inc: A 60-year-old founder is 3 times as likely to found a successful startup as a 30-year-old founder.
  • Andy Greenberg: Elkins programmed his tiny stowaway chip to carry out an attack as soon as the firewall boots up in a target’s data center. It impersonates a security administrator accessing the configurations of the firewall by connecting their computer directly to that port. Then the chip triggers the firewall’s password recovery feature, creating a new admin account and gaining access to the firewall’s settings. 
  • DSHR: If running a Lightning Network node were to be even a break-even business, the transaction fees would have to more than cover the interest on the funds providing the channel liquidity. But this would make the network un-affordable compared with conventional bank-based electronic systems, which can operate on a net basis because banks trust each other.
  • Marc Benioff: What public markets do is indeed the great reckoning. But it cleanses [a] company of all of the bad stuff that they have. I think in a lot of private companies these days, we’re seeing governance issues all over the place. I can’t believe this is the way they were running internally in all of these cases. They are staying private way too long.
  • Benjamin Franklin: I began now gradually to pay off the debt I was under for the printing house. In order to secure my credit and character as a tradesman, I took care not only to be in reality industrious and frugal, but to avoid all appearances to the contrary. I dressed plainly; I was seen at no places of idle diversion; I never went out a-fishing or shooting; a book, indeed, sometimes debauched me from my work, but that was seldom, snug, and gave no scandal; and, to show that I was not above my business, I sometimes brought home the paper I purchased at the stores through the streets on a wheelbarrow. Thus, being esteemed an industrious, thriving young man, and paying duly for what I bought, the merchants who imported stationery solicited my custom; others proposed supplying me with books, and I went on swimmingly.
  • @brightball: “BMW’s greatest product isn’t a car, it’s the factory.” – Best quote from #SAFeSummit #SAFe @ScaledAgile
  • @robmay: As an example, some research shows that more automation in warehouses increases overall humans working in the industry.  Why?  Because when you lower the human labor costs of a warehouse, you can put more warehouses in smaller towns that weren’t economically feasible before.  Having more automation will, initially, increase the desire for human skills like judgment, empathy, and just good old human to human interaction in some fields. The important point here is that you can’t think linearly about what will happen. It’s not a 1:1 replacement of automation taking human jobs. It is complex, and will change work in many different ways.
  • Quinn: The most consistent mistake that everyone makes when using AWS—this extends to life as well—is once people learn something, they stop keeping current on that thing. There is an entire ecosystem of people who know something about AWS, with a certainty. That is simply no longer true, because capabilities change. Restrictions get relaxed. Constraints stop applying. If you learned a few years ago that there are only 10 tags permitted per resource, you aren’t necessarily keeping current to understand that that limit is now 50.
  • @BrianRoemmele: Consider: the 1843 facsimile machine was invented by Alexander Bain, a clock inventor. A clock synchronized the movement of two pendulums for line-by-line scanning of a message. It wasn’t until the 1980s that network effects and the cost of machines made it very popular. Mechanical to digital.
  • @joshuastager: “If the ISPs had not repeatedly sued to repeal every previous FCC approach, we wouldn’t be here today.” – @sarmorris
  • @maria_fibonacci: – Make each program do one thing well. – Expect the output of every program to become the input to another, as yet unknown, program. I think the UNIX philosophy is very Buddhist 🙂
  • @gigastacey: People love their Thermomixers so much that of the 3 million connected devices they have sold, those who use their app have a 50% conversion to a subscription. That is an insane conversion rate. #sks2019
  • eclipsetheworld: I think this quote sums up my thoughts quite nicely: “When I was a product manager at Facebook and Instagram, building a true content-first social network was the holy grail. We never figured it out. Yet somehow TikTok has cracked the nut and leapfrogged everyone else.” — Eric Bahn, General Partner at Hustle Fund & Ex Instagram Product Manager
  • Doug Messier: The year 2018 was the busiest one for launches in decades. There were a total of 111 completely successful launches out of 114 attempts. It was the highest total since 1990, when 124 launches were conducted. China set a new record for launches in 2018. The nation launched 39 times with 38 successes in a year that saw a private Chinese company fail in the country’s first ever orbital launch attempt. The United States was in second place behind China with 34 launches. Traditional leader Russia launched 20 times with one failure. Europe flew eight times with a partial failure, followed by India and Japan with seven and six successful flights, respectively.
  • John Preskill: The recent achievement by the Google team bolsters our confidence that quantum computing is merely really, really hard. If that’s true, a plethora of quantum technologies are likely to blossom in the decades ahead.
  • Quinn: What people lose sight of is that infrastructure, in almost every case, costs less than payroll.
  • Lauren Smiley: More older people than ever are working: 63% of Americans age 55 to 64 and 20% of those over 65. 
  • Sparkle: These 12 to 18 core CPUs have lost most of their audience. The biggest audience for these consumer CPUs was video editing and streaming. Video encoding and decoding with Nvidia NVENC is 10 times faster and now has the same or higher quality than CPU encoding. Software like OBS, Twitch Studio, Handbrake, and Sony Vegas now all support NVENC. The only major software suite that doesn’t officially support NVENC yet is Premiere.
  • Timothy Prickett Morgan: To move data from DRAM memory on the PIM modules to one of the adjacent DPUs on the memory chips takes about 150 picoJoules (pJ) of energy, and this is a factor of 20X lower than what it costs to move data from a DRAM chip on a server into the CPU for processing. It takes on the order of 20 pJ of energy to do an operation on that data in the PIM DPU, which is inexplicably twice as much energy in this table. The server with PIM memory will run at 700 watts because that in-memory processing does not come for free, but we also do not think that a modern server comes in at 300 watts of wall power.
  • William Stein: The supply/demand pendulum has swung away from providers in favor of customers, with various new entrants bringing speculative supply online, while the most voracious consumers remain in digestion mode. Ultimately, we believe it’s a question of when, not if hyperscale procurement cycles enter their next phase of growth, and the pendulum can swing back the other direction quickly.
  • Jen Ayers: Big-game hunters are essentially targeting people within an organization for the sole purpose of identifying critical assets for the purpose of deploying their ransomware. [Hitting] one financial transaction server, you can charge a lot more for that than you could for a thousand consumers with ransomware—you’re going to make a lot more money a lot faster.
  • Eric Berger: Without the landing vision system, the rover would most likely still make it to Mars. There is about an 85% chance of success. But this is nowhere near good enough for a $2 billion mission. With the landing camera and software Johnson has led development of, the probability of success increases to 99%.
  • s32167: The headline should be “Intel urges everyone to use new type of memory that lowers performance for every CPU architecture to fix their own architecture security issues.”
  • Robert Haas: So, the “trap” of synchronous replication is really that you might focus on a particular database feature and fail to see the whole picture. It’s a useful tool that can supply a valuable guarantee for applications that are built carefully and need it, but a lot of applications probably don’t report errors reliably enough, or retry transactions carefully enough, to get any benefit.  If you have an application that’s not careful about such things, turning on synchronous replication may make you feel better about the possibility of data loss, but it won’t actually do much to prevent you from losing data.
  • Scott Aaronson: If you were looking forward to watching me dismantle the p-bit claims, I’m afraid you might be disappointed: the task is over almost the moment it begins. “p-bit” devices can’t scalably outperform classical computers, for the simple reason that they are classical computers. A little unusual in their architecture, but still well-covered by the classical Extended Church-Turing Thesis. Just like with the quantum adiabatic algorithm, an energy penalty is applied to coax the p-bits into running a local optimization algorithm: that is, making random local moves that preferentially decrease the number of violated constraints. Except here, because the whole evolution is classical, there doesn’t seem to be even the pretense that anything is happening that a laptop with a random-number generator couldn’t straightforwardly simulate. 
  • Handschuh: Adding security doesn’t happen by chance. In some cases it requires legislation or standardization, because there’s liability involved if things go wrong, so you have to start including a specific type of solution that will address a specific problem. Liability is what’s going to drive it. Nobody will do it just because they are so paranoid that they think that it must be done. It will be somebody telling them
  • Battery: The best marketing today—particularly mobile marketing—is not about providing a point solution but, instead, offering a broader technology ecosystem to understand and engage customers on their terms. The Braze-powered Whopper campaign, for instance, helped transform an app that had been primarily a coupon-delivery service into a mobile-ordering system that also offered a deeper connection to the Burger King brand.
  • Jakob: I think that we need to think of programming just like any other craft, trade, or profession with an intersection on everyday life: it is probably good to be able to do a little bit of it at home for household needs. But don’t equate that to the professional development of industrial-strength software.  Just like being able to use a screwdriver does not mean you are qualified to build a house, being able to put some blocks or lines of code together does not make you a programmer capable of building commercial-grade software.
  • @benedictevans: TikTok is introducing Americans to a question that Europeans have struggled with for 20 years: a lot of your citizens might use an Internet platform created somewhere that doesn’t know or care about your laws or cultural attitudes and won’t turn up to a committee hearing
  • Robert Pollack: So let me say something about our uniqueness, which is embedded in our DNA. Simple probabilities. Every position in DNA has one of four possible base pairs. Three billion letters long. Each position in the text could have one of four choices. So how many DNAs are there? There are four times four two-letter words in DNA, four for the first letter, four for the second—sixteen possible two-letter words. Sixty-four possible three-letter words. That is to say, how many possible human genomes are there? Four to the power 3 billion, which is to say a ridiculous, infinite number. There are only 10^80 elementary particles in the universe. Each of us is precisely, absolutely unique while we are alive. And in our uniqueness, we are absolutely different from each other, not by more or less, but absolutely different.
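Pollack’s combinatorics check out, and logarithms make the scale concrete. A quick sketch (the variable names are ours, purely for illustration):

```python
import math

# Checking the combinatorics in Pollack's quote with logarithms,
# since 4**3_000_000_000 is far too large to materialize directly.
assert 4 ** 2 == 16      # sixteen possible two-letter words
assert 4 ** 3 == 64      # sixty-four possible three-letter words

genome_length = 3_000_000_000                      # base pairs
digits_in_genome_count = genome_length * math.log10(4)
print(f"4^(3 billion) has about {digits_in_genome_count:,.0f} digits")
# A number with ~1.8 billion digits, versus 10^80 (81 digits)
# elementary particles in the universe.
```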

Useful Stuff:

  • After 2000 years of taking things apart into smaller things, we have learned that all matter is made of molecules, and that molecules are made of atoms. Has Reductionism Run its Course? Or, in the context of the cloud: has FaaS run its course? The “everything is a function” meme is a form of reductionism. And like reductionism in science, FaaS reductionism has been successful, as the “business value” driven crowd is fond of pointing out. But that’s not enough when you want to understand the secrets of the universe, which in this analogy is figuring out how to take the next step in building systems. Lambda is like the Large Hadron Collider in that it confirmed the standard model, but hasn’t moved us forward. At some point we need to stop looking at functions and explore using some theory-driven insight. We see tantalizing bits of a greater whole as we layer abstractions on top of functions. There are event buses, service meshes, service discovery services, workflow systems, pipelines, etc.—but these are all still part of the standard model of software development. Software development, like physics, is stuck looking for a deeper understanding of its nature, yet we’re trapped in a gilded cage of methodological reductionism. As with physics, “the next step forward will be a case of theory reduction that does not rely on taking things apart into smaller things.”
  • SmashingConf Freiburg 2019 videos are now available. You might like The Anatomy Of A Click.
  • There’s a lot of energy and ideas at serverlessconf:
    • If you’re looking for the big picture: ServerlessConf NYC 2019: everything you missed
    • @jeremy_daly: Great talk by @samkroon. Every month, @acloudguru uses 240M Lambda calls, 180M API Gateway calls, and 90TB of data transfer through CloudFront. Total cost? ~$2,000 USD. #serverless #serverlessftw #Serverlessconf
    • @ryans140: We’re in a similar situation. 3 environments, 60+ microservices, serverless data lake. $1,400 a month. Down from $12k monthly in a VM-based datacenter.
    • @gitresethard: This is a very real feeling at #Serverlessconf this year. There’s a mismatch between the promise of focusing on your core differentiators and the struggle with tooling that hasn’t quite caught up.
    • @hotgazpacho: “Kubernetes is over-hyped and elevating the least-interesting part of your application. Infrastructure should be boring.” – @lindydonna 
    • @QuinnyPig: Lambda: “Get a file” S3: “Here it is.” There’s a NAT in there.  (If it’s the Managed NAT gateway you pay a 4.5¢ processing fee / @awscloud tax on going about your business.) #serverlessconf
    • @ben11kehoe: Great part the @LEGO_Group serverless story: they started with a single Lambda, to calculate sales tax. Your journey can start with a small step! #Serverlessconf
    • @ryanjonesirl: Thread about @jeremy_daly talk at #Serverlessconf #Serverless is (not so) simple Relational and Lambda don’t mix well.
    • @jssmith: Just presented the Berkeley View on #Serverless at #Serverlessconf
      • Serverless is more than FaaS
      • Cloud programming simplified
      • Next phase in cloud evolution
      • Using servers will seem like using assembly language
      • Serverless computing will serve just about every use case
      • Serverless computing bill will converge to the serverful cost
      • Machine learning will play an important role in optimizing execution
      • Serverless computing will embrace heterogeneous hardware (GPU, TPU, etc) 
      • Serverful cloud computing will decline relative to serverless computing
  • Awesome writeup. Lots to learn on how to handle 4,200 Black Friday orders per minute, especially if you’re interested in running an ecommerce site on k8s in AWS using microservices. Building and Running Applications at Scale in Zalando
    • In our last Black Friday, we broke all the records of our previous years, and we had around 2 million orders. In the peak hour, we reached more than 4,200 orders per minute.
    • We have come a long way: we migrated from monolith to microservices around 2015. Nowadays, in 2019, we have more than 1,000 microservices. Our current tech organization is composed of more than 1,000 developers across more than 200 teams. Every team is organized strategically to cover a customer journey and a business area. Every team can also have members with multidisciplinary skills like frontend, backend, data science, UX, research, and product: whatever the team needs.
    • Since we have all of these things, we also have end-to-end responsibility for the services that every team has to manage…We also found out that it’s not easy when every team does things its own way, so we ended up having standard processes for how we develop software. This was enabled by the tools that our developer productivity team provides us. Every team can easily start a new project, set it up, start coding, build it, test it, deploy it, monitor it, and so on, across the whole software development cycle.
    • All our microservices run in AWS and Kubernetes. When we migrated from monolith to microservices, we also migrated to the cloud. We started using AWS services like EC2 instances and CloudFormation…All our microservices, not only checkout but also our Lambda microservices, are running in containers. Every microservice environment is abstracted from our infrastructure.
    • After this, we also have frontend fragments, which are frontend microservices. Frontend microservices are services that provide server-side rendering of what we call fragments. A fragment is a piece of a page, for example, a header, a body, a content area, or a footer. You can have one page where you see one thing, but every piece can be something that different teams own.
    • Putting it all together, we do retries of operations with exponential back off. We wrap operations with the circuit breaker. We handle failures with fallbacks when possible. Otherwise, we have to make sure to handle the exceptions to avoid unexpected errors.
    • Every microservice that we have has the same infrastructure. We have a load balancer that handles the incoming request. This distributes the request across the replicas of our microservice in multiple instances or, if we are using Kubernetes, in multiple pods. Every instance is running with a Zalando base image. This base image contains a lot of things that are needed to be compliant, to be secure, and to make sure that we have the right policies implemented, because we are a serious company and we take our business seriously.
    • What we didn’t know is that when we have more instances, it also means that we have more database connections. Before, even with 26 million active customers using the website in different patterns, it was not a problem. Now, we had 10 times more instances creating connections to our Cassandra database. Poor Cassandra was not able to handle all of these connections.
    • When doing rollouts, make sure you keep the same capacity for the current traffic that you have. Otherwise, your service is likely to become unavailable just because you’ve introduced a new feature.
    • For our Black Friday preparation, we have a business forecast that tells us we want to make this and that amount of orders, and then we also do load testing of the real customer journey.
    • Then all the services involved in this journey are identified, and we do load testing on top of this. With this, we were able to do capacity planning, so we could scale our services accordingly, and we could also identify bottlenecks or things that we might need to fix for Black Friday.
    • For every microservice that is involved in Black Friday, we also have a checklist where we review, is the architecture and dependencies reviewed? Are the possible points of failures identified and mitigated? Do we have reliability patterns for all our microservices that are involved? Are configurations adjustable without need of deployment?
    • we are one company doing Black Friday. Then there are another 100 companies or more also doing Black Friday. What happened to us already in one or two Black Fridays was that AWS ran out of resources. We don’t want to make a deployment and start new instances because we might get into the situation where there are no more resources in AWS.
    • In the final day of Black Friday, we have a situation room. All teams that are involved in the services that are relevant for the Black Friday are gathered in one situation room. We only have one person per team. Then we are all together in this space where we monitor, and we support each other in case there is an incident or something that we need to handle
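The resilience patterns Zalando describes above (retries with exponential backoff, circuit breakers, fallbacks) can be sketched in a few lines of Python. This is a minimal illustration, not Zalando’s code; all names and thresholds are made up:

```python
import random
import time

class CircuitBreaker:
    """Opens after repeated failures, letting callers fail fast to a fallback."""
    def __init__(self, failure_threshold=3, reset_after=30.0):
        self.failures = 0
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        # Half-open: let a trial request through after the cool-down period.
        return time.monotonic() - self.opened_at >= self.reset_after

    def record_success(self):
        self.failures, self.opened_at = 0, None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()

def call_with_retries(operation, breaker, fallback, attempts=3, base_delay=0.1):
    """Retry with exponential backoff; fall back when the circuit is open
    or all attempts are exhausted."""
    if not breaker.allow():
        return fallback()
    for attempt in range(attempts):
        try:
            result = operation()
            breaker.record_success()
            return result
        except Exception:
            breaker.record_failure()
            # Exponential backoff with jitter: ~0.1s, ~0.2s, ~0.4s, ...
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))
    return fallback()
```

The breaker keeps a flaky dependency from being hammered during an incident, while the fallback (a cached price, a default sales-tax rate) keeps the checkout flow degraded rather than down.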
  • Videos from CppCon 2019 are now available. You might like Herb Sutter’s “De-fragmenting C++: Making Exceptions and RTTI More Affordable and Usable”.
  • Introducing SLOG: Cheating the low-latency vs. strict serializability tradeoff: Bottom line: there is a fundamental tradeoff between consistency and latency. And there is another fundamental tradeoff between serializability and latency… it is impossible to achieve both strict serializability and low latency reads and writes…By cheating the latency-tradeoff, SLOG is able to get average latencies on the order of 10 milliseconds for both reads and writes for the same geographically dispersed deployments that require hundreds of milliseconds in existing strictly serializable systems available today. SLOG does this without giving up strict serializability, without giving up throughput scalability, and without giving up availability (aside from the negligible availability difference relative to Paxos-based systems from not being as tolerant to network partitions). In short, by improving latency by an order of magnitude without giving up any other essential feature of the system, an argument can be made that SLOG is strictly better than the other strictly serializable systems in existence today.
  • Data races are very hard to find. Usually the way you find them is a late night call when a system locks up for no discernible reason. So it’s remarkable Google’s Kernel Concurrency Sanitizer (KCSAN) found over 300 data race conditions within the Linux kernel. Here’s the announcement.
  • How much will you save running Windows on AWS? A lot, says IDC in The Infrastructure Cost and Staff Productivity Benefits of Running High-Performing Windows Workloads in the AWS Cloud: Based on interviews with these organizations, IDC quantifies the value they will achieve by running Windows workloads on AWS at an average of $157,300 per 100 users per year ($6.59 million per organization)…IT infrastructure cost reductions: Study participants reduce costs associated with running on-premises environments and benefit from more efficient use of infrastructure and application licenses…IT staff productivity benefits: Study participants reduce the day-to-day burden on IT infrastructure, database, application management, help desk, and security teams and enable application development teams to work more effectively…Risk mitigation — user productivity benefits: Study participants minimize the operational impact of unplanned application outages…Business productivity benefits: Study participants better address business opportunities and provide their employees with higher-performing and more timely applications and features.
    • Food and beverage organization: “We definitely go ‘on the cheap’ to start with AWS because it’s easy just to add extra storage per server instance in seconds. We will spin up a workload with what we feel is the minimum, and then add to it as needed. It definitely has put us in a better place to utilize resources regarding services and infrastructure.”
    • Healthcare organization: “Licensing cost efficiencies were one of the reasons we went to the cloud with AWS. The way that you collaborate these licensing contracts through AWS for software licenses versus having to buy the licenses on our own has already been more cost effective for us. We’re saving 10%.”
  • A fun approach to learning SQL. NUKnightLab/sql-mysteries: There’s been a Murder in SQL City! The SQL Murder Mystery is designed to be both a self-directed lesson to learn SQL concepts and commands and a fun game for experienced SQL users to solve an intriguing crime.
  • Caching improves your serverless application’s scalability and performance. It helps you keep your cost in check even when you have to scale to millions of users. All you need to know about caching for serverless applications: Lambda auto-scales by traffic. But it has limits… if your traffic is very spiky then the 500/min limit will be a problem…Caching improves response time as it cuts out unnecessary roundtrips…My general preference is to cache as close to the end-user as possible…Where should you implement caching? Route53 as the DNS. CloudFront as the CDN. API Gateway to handle authentication, rate limiting and request validation. Lambda to execute business logic. DynamoDB as the database.
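One of the cheapest caches in a serverless app is the Lambda container itself: anything stored at module scope survives across warm invocations. A minimal Python sketch, with made-up names standing in for a DynamoDB or S3 read:

```python
import time

_cache = {}          # module scope: lives across warm invocations
_TTL_SECONDS = 60

def fetch_config(key):
    # Hypothetical stand-in for a DynamoDB or S3 round trip.
    return {"feature_flags": {"new_checkout": True}}

def get_cached(key):
    entry = _cache.get(key)
    if entry and time.monotonic() - entry[0] < _TTL_SECONDS:
        return entry[1]          # cache hit: no round trip
    value = fetch_config(key)    # cache miss: pay the round trip once
    _cache[key] = (time.monotonic(), value)
    return value

def handler(event, context):
    config = get_cached("config")
    return {"statusCode": 200, "flags": config["feature_flags"]}
```

Each warm container keeps its own copy, so a short TTL bounds staleness; for cross-container consistency you still want the CloudFront/API Gateway layers the article describes.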
  • To quote the Good Place, “This is forked.” But in a good way. A Multithreaded Fork of Redis That’s 5X Faster Than Redis
    • In regards to why fork Redis in the first place, KeyDB has a different philosophy on how the codebase should evolve. We feel that ease of use, high performance, and a “batteries included” approach is the best way to create a good user experience. While we have great respect for the Redis maintainers it is our opinion that the Redis approach focuses too much on simplicity of the code base at the expense of complexity for the user. This results in the need for external components and workarounds to solve common problems.
    • KeyDB works by running the normal Redis event loop on multiple threads. Network IO and query parsing are done concurrently. Each connection is assigned a thread on accept(). Access to the core hash table is guarded by a spinlock. Because the hashtable access is extremely fast this lock has low contention. Transactions hold the lock for the duration of the EXEC command. Modules work in concert with the GIL, which is only acquired when all server threads are paused. This maintains the atomicity guarantees modules expect.
    • @kellabyte: I’ve been saying for years the architecture of Redis has been poorly designed in its single threaded nature among several other issues. KeyDB is a multi-threaded fork that attempts to fix some of these issues and achieves 5x the perf. Antirez has convinced a lot of people that whatever he says must be true 😛 Imagine running 64 instances of redis on a 64 core box? Oh god haha…I do. Having built Haywire up to 15 million HTTP requests/second using the same architecture myself I believe the numbers. It’s good engineering.
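The thread-per-connection-plus-lock structure described above can be sketched in a few lines. This is a toy illustration only: KeyDB guards its hash table with a spinlock in C++, a mutex stands in here, and Python's GIL means the sketch shows the shape of the design, not its speedup.

```python
import threading
import queue

class ToyStore:
    """Shared hash table guarded by a single lock, KeyDB-style."""
    def __init__(self):
        self._table = {}
        self._lock = threading.Lock()  # stands in for KeyDB's spinlock

    def execute(self, cmd, key, value=None):
        # Only the hash-table touch is serialized; network IO and
        # parsing would happen outside the lock, concurrently.
        with self._lock:
            if cmd == "SET":
                self._table[key] = value
                return "OK"
            if cmd == "GET":
                return self._table.get(key)
            raise ValueError(cmd)

def serve_connection(store, commands, results):
    # One thread per accepted connection, as in KeyDB's accept() model.
    for cmd in commands:
        results.put(store.execute(*cmd))

store = ToyStore()
results = queue.Queue()
threads = [
    threading.Thread(target=serve_connection, args=(store, cmds, results))
    for cmds in [[("SET", "a", 1)], [("SET", "b", 2)]]
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each critical section is tiny, the lock is rarely contended, which is the whole bet behind the design.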
  • Frugal computing: Companies care about cheap computing…How can we trade off speed with the monetary cost of computing?…With frugal computing, we should try to avoid the cost of state synchronization as much as possible. So work should be done on one machine if it is cheaper to do so and the generous time budget is not exceeded…Memory is expensive but storage via local disk is not. And time is not pressing. So we can consider out-of-core execution, juggling between memory and disk…Communication costs money. So batching communication and trading off computation with communication…We may then need schemes for data-naming (which may be more sophisticated than a simple key), so that a node can locate the result it needs in S3 instead of computing it itself. This can allow nodes to collaborate with other nodes in an asynchronous, offline, or delay-tolerant way…In frugal computing, we cannot afford to allocate extra resources for fault-tolerance, and we need to do it in a way commensurate with the risk of fault and the cost of restarting computation from scratch. Snapshots that are saved for offline collaboration may be useful for building frugal fault-tolerance.
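The data-naming idea can be sketched as content-addressed memoization: derive a deterministic name from the task and its inputs, and check a shared store (S3 in the post; a plain dict stands in here) before computing. All names below are illustrative, not from the post:

```python
import hashlib
import json

# Stand-in for S3: a shared key/value store of previously
# computed results, addressed by a deterministic name.
shared_store = {}

def result_name(task, params):
    # Deterministic name derived from the task and its inputs, so any
    # node can locate a result another node already computed.
    payload = json.dumps({"task": task, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def compute_or_fetch(task, params, fn):
    name = result_name(task, params)
    if name in shared_store:      # another node already paid for this
        return shared_store[name]
    result = fn(**params)         # do the work on this one machine
    shared_store[name] = result   # publish for asynchronous, offline reuse
    return result
```

Because the name depends only on the inputs, nodes never need to coordinate: whoever computes first publishes, and everyone else reads, which is exactly the asynchronous, delay-tolerant collaboration the post describes.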
  • A good summary from DevSecCon Seattle 2019 Round Up
  • Corruption is a workaround, a utility in a place where there are few better options to solve a problem. Innovation is the antidote to corruption. ~ Corruption is not the problem hindering our development. In fact, conventional thinking on corruption and its relationship to development is not only wrong, it’s holding many poor countries back…many programs fail to reduce corruption because we have the equation backwards. Societies don’t develop because they’ve reduced corruption; they are able to reduce corruption because they’ve developed. And societies develop through investment in innovation…there’s a relationship between scarcity and corruption; in most poor countries way too many basic things are scarce…this creates the perfect breeding ground for corruption to occur…investing in businesses that make things affordable and accessible to more people attacks this scarcity and creates the revenues for governments to reinvest in their economies. When this happens on a country-wide level it can revolutionize nations…as South Korea became prosperous it was able to transition from an authoritarian government to a democratic government and has been able to reinvest in building its institutions, and this has paid off…what we found when we looked at the most prosperous countries today is that they were able to reduce corruption as they became prosperous, not before.
  • My take on: Percona Live Europe and ProxySQL Technology Day: It goes without saying that MySQL was predominant and the more interesting tracks were there. Not because I come from MySQL, but because the ecosystem helped make the track more interesting. Postgres had some interesting talks, but let’s say it clearly, there were just a few from the community. Mongo was really low in attendees. The number of attendees during the talks and the absence of the MongoDB community clearly indicated that the event is not in the area of interest of MongoDB users.
  • Put that philosophy degree to work. Study some John Stuart Mill and you’re ready for a job in AI. What am I talking about? Peter Norvig in Artificial Intelligence: A Modern Approach talks about how AI started out by defining AI as maximizing expected utility: just give us the utility function and we have all these cool techniques for optimizing it. But now we’re saying maybe the optimization part is the easy part, and the hard part is deciding what my utility function is. What do we want as a society? What is utility? Utilitarianism is filled with just these kinds of endless debates. And as usual, when you dive deep, absolutes fade away and what remains are shades of grey. As of yet there’s no utility calculus. So if you’re expecting AI to solve life’s big questions, it turns out we’ll need to solve them before AI can.
  • You too can use these techniques. Walmart Labs on Here’s What Makes Apache Flink scale:
    • I have been using Apache Flink in production for the last three years, and every time it has managed to excel at any workload thrown at it. I have run Flink jobs handling data streams at more than 10 million RPM with no more than 20 cores.
    • Reduce Garbage Collection – Flink takes care of this by managing memory itself.
    • Minimize data transfer – several mapping and filter transformations are done sequentially in a single slot. This chaining minimizes the sharing of data between slots and multiple JVM processes. As a result, jobs have a low network I/O, data transfer latencies, and minimal synchronization between objects.
    • Squeeze your bytes – To avoid storing heavy JVM objects, Flink implements its own serialization algorithm, which is much more space-efficient.
    • Avoid blocking everyone – Flink revamped its network communications after Flink 1.4. This new policy is called credit-based flow control. Receiver sub-tasks announce how many buffers they have left to sender sub-tasks. When a sender becomes aware that a receiver doesn’t have any buffers left, it merely stops sending to that receiver. This helps in preventing the blocking of TCP channels with bytes for the blocking sub-task.
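A toy model of that credit-based handshake (class and method names invented here): the receiver announces how many buffers it has free, the sender spends one credit per record, and when credits hit zero the sender simply stops instead of clogging the shared TCP channel:

```python
class Receiver:
    """A sub-task that announces its free buffers as credits."""
    def __init__(self, buffers):
        self.credits = buffers   # announced available buffers
        self.received = []

    def deliver(self, record):
        self.received.append(record)
        self.credits -= 1        # one buffer consumed per record

    def free_buffer(self):
        self.credits += 1        # announce a freed buffer to the sender

def send_all(records, receiver):
    sent, backpressured = [], []
    for r in records:
        if receiver.credits > 0:
            receiver.deliver(r)
            sent.append(r)
        else:
            # Sender stops on its own; no bytes for a blocked sub-task
            # ever enter the shared channel.
            backpressured.append(r)
    return sent, backpressured
```

The key property: backpressure is decided at the sender using receiver-announced state, so one slow sub-task cannot stall unrelated traffic that shares the same TCP connection.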
  • A good experience report from The Full Stack Fest Experience 2019
  • Places to intervene in a system: 12. Constants, parameters, numbers (such as subsidies, taxes, standards); 11. The sizes of buffers and other stabilizing stocks, relative to their flows; 10. The structure of material stocks and flows (such as transport networks, population age structures); 9. The lengths of delays, relative to the rate of system change; 8. The strength of negative feedback loops, relative to the impacts they are trying to correct against; 7. The gain around driving positive feedback loops; 6. The structure of information flows (who does and does not have access to information); 5. The rules of the system (such as incentives, punishments, constraints); 4. The power to add, change, evolve, or self-organize system structure; 3. The goals of the system; 2. The mindset or paradigm out of which the system — its goals, structure, rules, delays, parameters — arises; 1. The power to transcend paradigms.
  • The big rewrite can work, but perhaps the biggest lesson is big design up front is almost always a losing strategy. Why we decided to go for the Big Rewrite: We used to be heavily invested into Apache Spark – but we have been Spark-free for six months now…One of our original mistakes (back in 2014) had been that we had tried to “future-proof” our system by trying to predict our future requirements. One of our main reasons for choosing Apache Spark had been its ability to handle very large datasets (larger than what you can fit into memory on a single node) and its ability to distribute computations over a whole cluster of machines. At the time, we did not have any datasets that were this large. In fact, 5 years later, we still do not…With hindsight, it seems obvious that divining future requirements is a fool’s errand. Prematurely designing systems “for scale” is just another instance of premature optimization…We do not need a distributed file system, Postgres will do…We do not need a distributed compute cluster, a horizontally sharded compute system will do…We do not need a complicated caching system, we can simply cache whole datasets in memory instead…We do not need cluster-wide parallelism, single-machine parallelism will do…We do not need to migrate the storage layer and the compute layer at the same time, we can do one after the other…Avoid feature creep…Test critical assumptions early…Break project up into a dependency tree…Prototype as proof-of-concept…Get new code quickly into production…Opportunistically implement new features…Use black-box testing to ensure identical behavior…Build metrics into the system right from the start…Single-core performance first, parallelism later.
  • Interesting mix of old and new. What is the technology behind Nextdoor in 2019? 
    • Deploying to production 12–15 times. Inserting billions of rows to our Postgres and DynamoDB tables. Handling millions of user sessions concurrently. 
    • Django Framework for web applications; NGINX and uWSGI to serve our Python 3 code behind an Amazon Elastic Load Balancer; Conda to manage our Python environments; MyPy to add type safety to the codebase.
    • PostgreSQL is the database. Horizontal scaling uses a combination of application-specific read replicas and a connection pooler (PGBouncer); a custom Load Balancer microservice sits in front of the databases; DynamoDB for documents that need fast retrieval.
    • Memcached and HAProxy help with performance; Redis via ElastiCache provides the right data type for the job; CloudFront as the CDN; SQS for job queues.
    • Jobs are consumed off SQS using a custom Python-based distributed job processor called Taskworker. They built a cron-type system on top of Taskworker.
    • Microservices are written in Go and use gorilla/mux as the router. Zookeeper for service configuration. Communicating between services uses a mix of SQS, Apache Thrift and JSON APIs. Storage is mostly DynamoDB. 
    • Most data processing is done via AirFlow, which aggregates PostgreSQL data to S3; it is then loaded into Presto.
    • For Machine Learning: Scikit-Learn, Keras, and Tensorflow.
    • Services are deployed as Docker images, using docker-compose for local development, ECS / Kubernetes for prod/staging environments.
    • Considering moving everything to k8s in the future.
    • Python deployments are done via Nextdoor/conductor, a Go App in charge of continuously releasing our application via Trains -a group of commits to be delivered together. Releases are made using CloudFormation via Nextdoor/Kingpin.
    • React and Redux on the frontend speaking GraphQL and JSON APIs. 
    • PostGIS extension is used for spatial operations using libraries like GDAL and GEOS for spatial algorithms and abstractions, and tools like Mapnik and the Google Maps API to render map data.
    • Currently in the process of developing a brand new data store and custom processing pipeline to manage the high volume of geospatial data they expect to store (1B+ rows) as they expand internationally.
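The read-replica half of that setup can be sketched as a tiny router that sends writes to the primary and round-robins reads across application-specific replicas. All connection names below are invented for illustration; the real system sits behind PGBouncer:

```python
import itertools

class ReplicaRouter:
    """Hypothetical router: writes hit the primary, reads round-robin
    across the replicas designated for a given application area."""
    def __init__(self, primary, replicas_by_app):
        self.primary = primary
        self._cycles = {app: itertools.cycle(rs)
                        for app, rs in replicas_by_app.items()}

    def connection_for(self, app, is_write):
        if is_write:
            return self.primary   # all writes go to the primary
        return next(self._cycles[app])

router = ReplicaRouter(
    "pg-primary",
    {"feed": ["feed-replica-1", "feed-replica-2"],
     "search": ["search-replica-1"]},
)
```

Keeping replicas application-specific means a heavy scan from one feature cannot degrade read latency for another, which is the point of the "application-specific read replicas" design.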
  • How LinkedIn customizes Apache Kafka for 7 trillion messages per day
    • At LinkedIn, some larger clusters have more than 140 brokers and host one million replicas in a single cluster. With those large clusters, we experienced issues related to slow controllers and controller failure caused by memory pressure. Such issues have a serious impact on production and may cause cascading controller failure, one after another. We introduced several hotfix patches to mitigate those issues—for example, reducing controller memory footprint by reusing UpdateMetadataRequest objects and avoiding excessive logging.
    • As we increased the number of brokers in a cluster, we also realized that slow startup and shutdown of a broker can cause significant deployment delays for large clusters. This is because we can only take down one broker at a time for deployment to maintain the availability of the Kafka cluster. To address this deployment issue, we added several hotfix patches to reduce startup and shutdown time of a broker (e.g., a patch to improve shutdown time by reducing lock contention). 

Soft Stuff:

  • Hydra (article): a framework for elegantly configuring complex applications. Hydra offers an innovative approach to composing an application’s configuration, allowing changes to a composition through configuration files as well as from the command line.
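Hydra's composition idea, roughly: layered config sources are merged key by key, with later layers (such as command-line overrides) winning over earlier ones. A toy deep-merge illustration of that layering, not Hydra's actual API:

```python
def deep_merge(base, override):
    # Later layers win over earlier ones, key by key; nested dicts
    # are merged recursively rather than replaced wholesale.
    out = dict(base)
    for k, v in override.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = deep_merge(out[k], v)
        else:
            out[k] = v
    return out

# Layers: defaults from a config-group file, then the app config,
# then command-line overrides (e.g. `app.py db.timeout=20`).
db_group = {"db": {"driver": "postgresql", "timeout": 10}}
app_config = {"app": {"name": "demo"}}
cli_overrides = {"db": {"timeout": 20}}

cfg = deep_merge(deep_merge(db_group, app_config), cli_overrides)
# cfg["db"] == {"driver": "postgresql", "timeout": 20}
```

Hydra itself builds the layers from YAML files in config groups and parses the overrides off `sys.argv`, but the merge-with-precedence behavior is the heart of it.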
  • uttpal/clockwork (article): a general purpose distributed job scheduler. It offers a horizontally scalable scheduler with at-least-once delivery guarantees. The currently supported task delivery mechanism is Kafka; at task execution time, the schedule data is pushed to the given Kafka topic.
  • linkedin/kafka: the version of Kafka running at LinkedIn. Kafka was born at LinkedIn. We run thousands of brokers to deliver trillions of messages per day. We run a slightly modified version of Apache Kafka trunk. This branch contains the LinkedIn Kafka release.
  • serverlessunicorn/ServerlessNetworkingClients (article): Serverless Networking adds back the “missing piece” of serverless functions, enabling you to perform distributed computations, high-speed workflows, easy to use async workers, pre-warmed capacity, inter-function file transfers, and much more.

    Pub Stuff:

    • LSST Active Optics System Software Architecture:  In this paper, we describe the design and implementation of the AOS. More particularly, we will focus on the software architecture as well as the AOS interactions with the various subsystems within LSST.
    • Content Moderation for End-to-End Encrypted Messaging: I would like to reemphasize the narrow goal of this paper: demonstrating that forms of content moderation may be technically possible for end-to-end secure messaging apps, and that enabling content moderation is a different problem from enabling law enforcement access to content. I am not yet advocating for or against the protocols that I have described. But I do see enough of a possible path forward to merit further research and discussion.
    • SLOG: Serializable, Low-latency, Geo-replicated Transactions (article): For decades, applications deployed on a world-wide scale have been forced to give up at least one of (1) strict serializability (2) low latency writes (3) high transactional throughput. In this paper we discuss SLOG: a system that avoids this tradeoff for workloads which contain physical region locality in data access. SLOG achieves high-throughput, strictly serializable ACID transactions at geo-replicated distance and scale for all transactions submitted across the world, all the while achieving low latency for transactions that initiate from a location close to the home region for data they access. Experiments find that SLOG can reduce latency by more than an order of magnitude relative to state-of-the-art strictly serializable geo-replicated database systems such as Spanner and Calvin, while maintaining high throughput under contention.
    • FCC-hh: The Hadron Collider: This report contains the description of a novel research infrastructure based on a highest-energy hadron collider with a centre-of-mass collision energy of 100 TeV and an integrated luminosity of at least a factor of 5 larger than the HL-LHC. It will extend the current energy frontier by almost an order of magnitude. The mass reach for direct discovery will reach several tens of TeV, and allow, for example, the production of new particles whose existence could be indirectly exposed by precision measurements during the preceding e+e− collider phase.
