News Aggregator

Building Reactive Microservices With Spring WebFlux on Kubernetes
Aggregated on: 2025-10-30 14:10:57
Migrating from a monolithic Java 8 system to a reactive microservice architecture on Kubernetes allowed us to dramatically improve performance and maintainability. In this article, I’ll share our journey, the key Spring Cloud Kubernetes features we adopted, the challenges we faced during development, and the lessons we learned along the way.
Business Logic
We have data processing logic that streams information into S3 storage using Kafka, Spark Streaming, and Iceberg. Initially, we encountered multiple challenges, such as file optimization issues and Spark’s unpredictable memory behavior. After addressing these issues, we achieved significant cost savings. Once the insert service was completed, we needed to select an appropriate search engine service. We chose Trino as it fulfilled the needs of our data science department. We also serve customers who perform operations on our S3 data, which can result in high system load. Before this modernization, our platform ran on an old monolithic architecture built with Java 8, which created several performance and maintenance challenges.
View more...

Improving Developer Productivity With End-to-End GenAI Enablement
Aggregated on: 2025-10-30 13:10:56
This is a very common scenario that every developer can relate to — I am focused on a feature, and suddenly my project buddy requests a PR review or asks for help when a test case is failing. Now, I need to context-switch to help my buddy, or the code review will be delayed. Every engineering team faces the same bottlenecks — context switching, boilerplate work, delayed code reviews, and slow onboarding. The goal is to improve developer enablement and boost productivity through automation. Generative AI amplifies that goal. From writing user stories to generating test cases, GenAI can automate repetitive tasks and provide real-time guidance. But the challenge is to connect all those capabilities cohesively rather than treat them as isolated tools.
View more...

How to Get a Frequency Table of a Categorical Variable as a Data Frame
Aggregated on: 2025-10-30 12:10:57
Categorical data is data with a predefined set of values. Using “Child,” “Adult,” or “Senior” instead of a person's age as a number is one example of age categorization. However, before using categorical data, one must know about the various forms of categorical data. First of all, categorical data may or may not be defined in an order. To say that the size of a box is small, medium, or large means that there is an order described as small < medium < large. The same does not hold for, say, sports equipment, which could also be categorical data but is differentiated by names like dumbbell, grippers, or gloves; that is, the items have no inherent order. Those that can be ordered are known as “ordinal,” while those where there is no such ordering are “nominal” in nature.
View more...
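As a quick illustration of the frequency-table idea in the article above, here is a minimal sketch using pandas; the library choice and the toy column are assumptions, not necessarily the article's own example:

```python
import pandas as pd

# Hypothetical categorical column; the article's own data will differ.
sizes = pd.Series(["small", "large", "medium", "small", "small", "large"], name="size")

# value_counts() returns a Series of counts indexed by category;
# rename_axis() names that index and reset_index() turns it into a DataFrame.
freq_table = (
    sizes.value_counts()
         .rename_axis("size")
         .reset_index(name="count")
)
print(freq_table)
```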
Building a New Testing Mindset for AI-Powered Web Apps
Aggregated on: 2025-10-30 11:10:56
The technology landscape is undergoing a profound transformation. For decades, businesses have relied on traditional web-based software to enhance user experiences and streamline operations. Today, a new wave of innovation is redefining how applications are built, powered by the rise of AI-driven development. However, as leaders adopt AI, a key challenge has emerged: ensuring its quality, trust, and reliability. Unlike traditional systems with clear requirements and predictable outputs, AI introduces complexity and unpredictability, making quality assurance (QA) both more challenging and more critical. Business decision-makers must now rethink their QA strategy and investments to safeguard reputation, reduce risk, and unlock the full potential of intelligent solutions.
View more...

From Autocomplete to Co-Creation: How AI Changes Developing/Debugging Workflows in Engineering
Aggregated on: 2025-10-29 19:25:56
The Shift to Co-Creation
We are in the middle of a new era of software engineering, where AI coding assistants are no longer just autocomplete helpers but valuable collaborators in the development and debugging process. These tools can speed up the creation of scripts, help navigate unfamiliar languages, and reduce the time spent on repetitive tasks. Yet, the engineer’s role remains central: applying expertise, understanding the problem space, and ensuring solutions are accurate, secure, and effective. AI acts as a helping hand that makes the process of creation faster. In this article, I will share several real-world examples to show how AI assistants are changing development and debugging workflows, from scripting with unfamiliar languages to working with complex APIs and debugging.
View more...

Emerging Patterns in Large-Scale Event-Driven AI Systems
Aggregated on: 2025-10-29 18:25:56
Modern distributed systems are increasingly being transformed by event-driven architectures (EDA) and the integration of artificial intelligence (AI). Organizations across FinTech, e-commerce, and IoT domains are moving from static request–response models to asynchronous, event-driven systems capable of processing billions of transactions in near real time. The traditional AI pipeline, train → deploy → infer, is designed for batch use and is effective when insights can wait. However, there are domains in which decisive action must occur immediately, for example, fraud detection, IoT telemetry, or autonomous navigation. Waiting would be an unacceptable risk.
View more...

ZEISS Demonstrates the Power of Scalable Workflows With Ampere® Altra® and SpinKube
Aggregated on: 2025-10-29 17:25:56
The Challenge
The cost of maintaining a system capable of processing tens of thousands of near-simultaneous requests, but which spends more than 90 percent of its time in an idle state, cannot be justified. Containerization promised the ability to scale workloads on demand, which includes scaling down when demand is low. Maintaining many pods across multiple clusters just so the system doesn’t waste time in the upscaling process contradicts the point of workload containerization.
The Solution
Fermyon produces a platform called SpinKube that leverages WebAssembly (WASM), originally created to execute small elements of bytecode in untrusted web browser environments, as a means of executing small workloads in large quantities in Kubernetes server environments. Because WASM workloads are smaller and easier to maintain, pods can be spun up just-in-time as network demand rises without consuming extensive time in the process. And because WASM consists of pre-compiled bytecode, it can be executed on server platforms powered by Ampere® Altra® without all the multithreading and microcode overhead that other CPUs typically bring to their environments — overhead that would, in less compute-intensive circumstances such as these, be unnecessary anyway.
View more...
Make Static Sites Feel Dynamic With APIs Only (No Backend Needed)
Aggregated on: 2025-10-29 16:25:56
A static site does not have to feel frozen. With a bit of JavaScript, a static page can request data from an API and update the page on the fly. That is the whole idea behind an API-only approach: HTML, CSS, and JavaScript live on a CDN, the browser calls APIs for content, and the page updates itself. Why should teams care? It is fast, cheap, and simple. Static files load from a CDN, deploys are trivial, and scale happens without heavy servers. It also works for real sites, like a blog fed by a headless CMS API, a product grid powered by a commerce API, or a contact form that posts to a forms service.
View more...

End of Static Knowledge Bases? How MCP Enables Live RAG
Aggregated on: 2025-10-29 15:10:56
There's a secret about production RAG systems that nobody talks about: the hardest part isn't building them — it's keeping them updated. Companies spend weeks curating documents, tuning embeddings, and perfecting their retrieval pipelines. Everything works beautifully at launch. Then reality hits. Prices change. Policies update. Products get renamed. Within weeks, the knowledge base is serving confidently wrong answers based on outdated information.
View more...

Series (2/4): Toward a Shared Language Between Humans and Machines — From Multimodality to World Models: Teaching Machines to Experience
Aggregated on: 2025-10-29 14:10:56
What if the key to a shared language lay in experience itself? Researchers are now exploring approaches that connect text with images, sounds, and interactions within a three-dimensional world. Sensorimotor grounding, multimodal perception, and world models: all of these paths aim to give machines the kind of anchoring they still so painfully lack.
View more...

A Developer's Practical Guide to Support Vector Machines (SVM) in Python
Aggregated on: 2025-10-29 13:10:56
Support Vector Machines (SVMs) are one of the most powerful and versatile supervised machine learning algorithms. Initially famous for their high performance "out of the box," they are capable of performing both linear and non-linear classification, regression, and outlier detection. For classification tasks, the core idea behind SVM is to find the optimal hyperplane that best separates the different classes in the feature space.
View more...
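As a tiny, self-contained illustration of the classification idea described above, here is a scikit-learn sketch; the toy data set and parameters are assumptions for demonstration, not the article's own example:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy two-class data set; the article's own data will differ.
X, y = make_classification(n_samples=200, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# An RBF-kernel SVC looks for the maximum-margin separating hyperplane
# in the (implicit) transformed feature space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```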
Debugging a Spark Driver Out of Memory (OOM) Issue With Large JSON Data Processing
Aggregated on: 2025-10-29 12:10:56
As a data engineer, I recently encountered a challenging scenario that highlighted the complexities of Apache Spark memory management and Spark internal processing. Despite working with what seemed like a moderate dataset (25 GB), I experienced a driver Out of Memory (OOM) error that halted my data replication job. In this article, I will discuss Spark's internal processing and memory management, and how understanding them can help us build a resilient data replication solution.
View more...

HSTS Beyond the Basics: Securing AI Infrastructure and Modern Attack Vectors
Aggregated on: 2025-10-29 11:10:56
It all started while I was working with a colleague on web security. I heard that their team was enabling HSTS as part of their Black Friday security upgrades to their website. The first question that popped into my mind was: why do you need HSTS if there is HTTP/2 and HTTP/3? You can read my article on Hackernoon to understand the basics of HSTS. For starters, HTTP Strict Transport Security (HSTS) is a web security policy mechanism that helps protect websites against protocol downgrade attacks and cookie hijacking. Introduced in 2012 as RFC 6797, HSTS has become a critical component of modern web security infrastructure, ensuring that browsers communicate with web servers exclusively over secure HTTPS connections. But as AI systems grow and move to production in enterprises, HSTS becomes just as critical for protecting machine learning pipelines, API endpoints, and model deployments. Let's explore advanced use cases and how HSTS principles apply to AI security.
View more...
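The policy itself is delivered as a single response header over HTTPS. A minimal sketch of emitting it from a Python web service follows; the Flask app and the one-year max-age are illustrative assumptions, not the setup described in the article:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts_header(response):
    # Tell browsers to use HTTPS only for this host (and its subdomains)
    # for the next 365 days; the header is only honored when served over HTTPS.
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains; preload"
    )
    return response

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    app.run()
```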
Building Secure Software: Integrating Risk, Compliance, and Trust
Aggregated on: 2025-10-28 19:25:55
This paper outlines a practical approach to secure software engineering that brings together:
- Static and Dynamic Application Security Testing (SAST & DAST)
- Information Security Risk Assessment (ISRA)
- Software Composition Analysis (SCA)
- Continuous Vulnerability Management
- Measuring Security Confidence (MSC) framework
- OWASP Top 10 secure coding standards
It also examines how regulations like the General Data Protection Regulation (GDPR) and the upcoming EU Cyber Resilience Act (CRA) are changing expectations around secure-by-design software and lifecycle accountability.
View more...

Building Cloud Ecosystems With Autonomous AI Agents: The Future of Scalable Data Solutions
Aggregated on: 2025-10-28 18:25:55
AI agents are a reality now and are one of the key research goals for AI companies and research labs. These agents automate monotonous and complicated workflows within cloud environments. They augment human capabilities in code generation and debugging, and they improve productivity by reducing manual effort, freeing people for creative and higher-level thinking while the agents do what they do best. In the process, AI agents are reshaping cloud and data systems. Scalability is maximized and efficiency is realized because humans finally get the time to innovate while AI does the tedious work, optimizing resources, predicting problems, and tailoring solutions. Agents can even detect errors quickly and make decisions based on data.
View more...

Unlocking Scalable Data Lakes: Building With Apache Iceberg, AWS Glue, and S3
Aggregated on: 2025-10-28 17:25:55
Introduction: The Pain of Traditional Data Lakes
Over the last decade, cloud object storage (Amazon S3, Azure Blob, Google Cloud Storage) has become the de facto substrate for data lakes. The promise was alluring: cheap, durable, infinitely scalable storage with a “store first, model later” mindset. But in practice, traditional data lakes quickly turned into “data swamps.” Engineers face recurring issues:
View more...

Optimizing Search: A Patent-Backed Approach to Perceived Speed
Aggregated on: 2025-10-28 16:25:55
So, imagine it’s a Friday night after a long week. The kids are finally asleep, and you’re ready to unwind with the new season of Stranger Things. You open the Netflix app, select that banner on the home page, and press play! And then you see that dreaded loading circle that just wouldn’t go away. Why?
View more...

Production-Ready Multi-Agent Systems: From Theory to Enterprise Deployment
Aggregated on: 2025-10-28 15:25:55
Your single AI agent is about to become obsolete. While you're debugging prompt chains, your competitors are deploying agent teams that coordinate like human organizations — achieving 40% cost reductions and 3x faster execution. This guide reveals the production patterns that separate the 20% of successful multi-agent deployments from the 80% that fail. You'll learn why the supervisor/worker pattern dominates, how evaluator agents prevent million-dollar mistakes, and what Uber, LinkedIn, and Klarna learned the hard way.
The $5.4 Billion Reality Check
Something fundamental shifted in 2024. The AI agent market exploded to $5.4 billion, with the majority of enterprises deploying multi-agent systems. But here's the uncomfortable truth: while everyone talks about agents, most implementations are elaborate prompt chains pretending to be intelligent systems.
View more...

Amazon Bedrock Guardrails for GenAI Applications
Aggregated on: 2025-10-28 14:25:55
Amazon Bedrock Guardrails enable you to implement safeguards and enforce responsible AI policies for your generative AI applications, tailored to specific use cases. With Guardrails, you can create multiple tailored configurations and apply them across different foundation models, ensuring a consistent user experience and standardized safety controls across all your generative AI applications. Guardrails allow you to configure denied topics to prevent undesirable subjects from being discussed and content filters to block harmful content in both input prompts and model responses. Guardrails can be used with text-only foundation models.
View more...

Data Migration in Software Modernization: Balancing Automation and Developers’ Expertise
Aggregated on: 2025-10-28 13:25:55
When business owners think about modernizing a legacy application, they often focus on the most visible part: a sleek new user interface. However, the real challenge often lies beneath the surface: the data migration strategy. Moving data from an outdated system isn’t just a simple copy-paste job. It requires deep planning and expert execution. While automated data migration tools promise speed and cost savings, they are not a silver bullet. In this article, we’ll explore why automated tools alone aren’t enough, when developer expertise remains irreplaceable, and how a hybrid approach can save time and money.
How Databases Evolve During Legacy Software Modernization
When we talk about how data usually changes in the context of modernizing legacy software, we typically mean the following key processes.
View more...

Understand and Optimize AWS Aurora Global Database
Aggregated on: 2025-10-28 12:25:55
AWS Aurora Database supports a global multi-region setup, including a primary and a secondary region. When engineering with Aurora Global Database, the default settings are great, but understanding all the available configuration options and how they come together saves time and effort. This article explains Global Write Forwarding and its effects in detail; it is a very handy setting that lets applications running in both the primary and secondary regions read and write. Note that this article is not about Aurora DSQL, which is a different service that supports an active-active setup out of the box.
Aurora Defaults
Aurora Global Database sets up a writer and a reader in the primary region, plus a reader and a standby writer instance in the secondary region. The standby writer will be promoted to a writer during a region failover, where the secondary region becomes primary.
View more...
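Write forwarding is turned on per secondary cluster. A rough boto3 sketch is below; the cluster identifiers are made up, and the parameter names should be verified against the current RDS CreateDBCluster API before relying on them:

```python
import boto3

# Client in the secondary region (example region).
rds = boto3.client("rds", region_name="eu-west-1")

# Attach a new secondary cluster to an existing Aurora global database and
# ask Aurora to forward writes received here to the primary cluster.
rds.create_db_cluster(
    DBClusterIdentifier="orders-secondary",      # hypothetical identifier
    Engine="aurora-mysql",
    GlobalClusterIdentifier="orders-global",     # hypothetical global cluster
    EnableGlobalWriteForwarding=True,
)
```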
Unpacking MCP Security: What You Need to Know
Aggregated on: 2025-10-28 11:25:55
Why the Model Context Protocol is powerful — and why it demands serious attention from security teams
Ever since the model context protocol (MCP) was developed and open-sourced by Anthropic in late 2024, it has become the go-to standard for linking large language models (LLMs) with external tools, APIs, and data sources. MCP simplifies and standardizes how models interact with systems, making it easier to build AI agents. Building dynamic tools with AI agents also becomes significantly easier.
View more...

Writing (Slightly) Cleaner Code With Collections and Optionals
Aggregated on: 2025-10-27 19:10:55
Kilo is an open-source project for creating and consuming RESTful and REST-like web services in Java. Among other things, it includes the Collections and Optionals classes, which are designed to help simplify code that depends on collection types and optional values, respectively. Both are discussed in more detail below.
Collections
Kilo’s Collections class provides a set of static utility methods for declaratively instantiating list, map, and set values:
View more...

Mastering Fluent Bit: Top Tip Using Telemetry Pipeline Parsers for Developers (Part 8)
Aggregated on: 2025-10-27 18:10:55
This series is a general-purpose getting-started guide for those of us wanting to learn about the Cloud Native Computing Foundation (CNCF) project Fluent Bit. Each article in this series addresses a single topic by providing insights into what the topic is, why we are interested in exploring it, where to get started with it, and how to get hands-on with learning about it as it relates to the Fluent Bit project.
View more...

Set Up Spring Data Elasticsearch With Basic Authentication
Aggregated on: 2025-10-27 17:10:55
Recently, I wrote the Introduction to Spring Data Elasticsearch 5.5 article about Spring Data Elasticsearch usage as a NoSQL database. The article covered just the setup of the unsecured Elasticsearch. However, we need to be able to connect to a secured Elasticsearch as well. Let's follow the previous article and see the changes needed to run and connect to the secured Elasticsearch.
In This Article, You Will Learn
- How to create a secure Elasticsearch
- How to connect to the secured Elasticsearch with Spring Data Elasticsearch
- How to change the password in Elasticsearch
Set Up Secured Elasticsearch
The setup for creating a secure Elasticsearch is pretty similar to the steps in the already-mentioned article. The technologies used in this article, compliant with the compatibility matrix, are:
View more...

Using Schema Registry to Manage Real-Time Data Streams in AI Pipelines
Aggregated on: 2025-10-27 16:10:55
In today’s AI-powered systems, real-time data is essential rather than optional. Real-time data streaming increasingly shapes how modern AI models serve applications that need quick decisions. However, as data streams increase in complexity and speed, ensuring data consistency is a significant engineering challenge. As we know, AI models are heavily dependent on the input data used to train them. This input data must not be corrupted or contain errors, because the accuracy, reliability, and fairness of the model’s predictions can be significantly affected if its quality is compromised.
AI models are developed and trained to identify patterns and make predictions based on input data. If we integrate these trained and tested models with real-time stream processing pipelines, predictions can be made on the fly. Real-time data streaming plays a key role for AI models because it allows them to handle and respond to data as it arrives, instead of relying only on old, fixed datasets. You can read my previous article, “AI on the Fly: Real-Time Data Streaming from Apache Kafka to Live Dashboards.” The big question, however, is how we can ensure that real-time data streaming in from various sources is free from errors and bad records. AI systems decide by spotting patterns in the data they were trained on; if this data has mistakes, doesn’t add up, or is messy, the model might pick up wrong patterns. This can lead to outputs that are biased, off the mark, or even risky.
View more...

Anthropic’s Model Context Protocol (MCP): A Developer’s Guide to Long-Context LLM Integration
Aggregated on: 2025-10-27 15:10:55
Large language models (LLMs) like Anthropic’s Claude have unlocked massive context windows (up to 100k tokens in Claude 2) that let them consider entire documents or codebases in a single go. However, effectively providing relevant context to these models remains a challenge. Traditionally, developers have resorted to complex prompt engineering or retrieval pipelines to feed external information into an LLM’s prompt. Anthropic’s Model Context Protocol (MCP) is a new open standard designed to simplify and standardize this process. Think of MCP as the “USB-C for AI applications” — a universal connector that lets your LLM seamlessly access external data, tools, and systems. In this article, we’ll explain what MCP is, why it’s important for long-context LLMs, how it compares to traditional prompt engineering, and walk through building a simple MCP-compatible context server in Python. We’ll also discuss practical use cases (like retrieval-augmented generation and agent tools) and provide code examples, diagrams, and references to get you started with MCP and Claude.
What is MCP and why does it matter?
MCP (Model Context Protocol) is an open protocol introduced by Anthropic in late 2024 to standardize how AI applications provide context to LLMs. In essence, MCP defines a common client–server architecture for connecting AI assistants to the places where your data lives — whether that’s local files, databases, cloud services, or business applications. Before MCP, integrating an LLM with each new data source or API meant writing a custom connector or prompt logic for that specific case. This led to a combinatorial explosion of integrations: M AI applications times N data sources could require M×N bespoke implementations. MCP addresses this by providing a universal interface so that any compliant AI client can communicate with any compliant data/service server, reducing the problem to M + N integration points.
View more...
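The article above promises a simple MCP-compatible context server in Python. As a rough sketch of the shape such a server can take with the FastMCP helper from the official MCP Python SDK (the resource URI, the tool, and the in-memory document store are made-up placeholders, not the article's actual example):

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical in-memory "knowledge base" standing in for real files or a database.
DOCS = {"pricing": "Current price list...", "policy": "Refund policy..."}

mcp = FastMCP("docs-context-server")

@mcp.resource("docs://{name}")
def read_doc(name: str) -> str:
    """Expose a document as context the client can pull into the model's prompt."""
    return DOCS.get(name, "not found")

@mcp.tool()
def search_docs(query: str) -> list[str]:
    """Let the model actively search the document store."""
    return [name for name, text in DOCS.items() if query.lower() in text.lower()]

if __name__ == "__main__":
    # stdio transport (the default) is typical for locally launched MCP servers.
    mcp.run()
```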
Series: Toward a Shared Language Between Humans and Machines Part 1/4: Why Machines Still Struggle to Understand Us
Aggregated on: 2025-10-27 14:10:55
Language models give the impression of conversing with us as if they really understood. But behind this fluency lies an illusion: machines share neither our experiences nor our intentions. This article explores the fundamental barriers that prevent any genuine mutual understanding: the absence of lived experience, the absence of a world, and the radical difference in how reasoning works. Anyone who has ever translated between two human languages can’t help but notice that the task is quite complex, even for someone who masters both languages perfectly. Language holds many subtleties and ambiguities, unspoken meanings, and things that are simply untranslatable from one language to another. These difficulties often have their roots in cultural grounding as well as in lived experience, the frames of thought that shape languages.
View more...

Enterprise-Grade Document Intelligence: Cloud Big Data AI With YOLOv9 and Spark on AWS
Aggregated on: 2025-10-27 13:10:55
Introduction
Document analysis, a modern way: Managing considerable volumes of documents, such as checks, ID cards, and tax forms, is an error-prone, tedious, and time-consuming endeavor for financial institutions and enterprises. The standard approach usually employs people and/or older, often less accurate, Optical Character Recognition (OCR) technology to try to manage the variable layouts in documents, the variability in handwriting, and issues with image quality.
View more...

Engineering Performance: Technical Analysis of telecom-mas-agent vs Google Cloud Pub/Sub in High-Throughput Telecom Automation
Aggregated on: 2025-10-27 12:10:55
Overview
When building telecom automation systems that process millions of messages daily, every millisecond and megabyte matters. After eighteen months of running production workloads and experiencing recurring performance bottlenecks with Google's enterprise-grade solutions, I embarked on a systematic engineering analysis to quantify the true performance characteristics of telecom automation tools. This investigation compares telecom-mas-agent (@npm-telecom-mas-agent) against Google Cloud Pub/Sub (@google-cloud/pubsub) across multiple dimensions: memory management, network optimization, error handling resilience, and computational efficiency. The findings reveal fundamental architectural differences that create measurable performance gaps when competing directly with Google's flagship messaging infrastructure.
View more...

Renaming Columns in PySpark: withColumnRenamed vs toDF
Aggregated on: 2025-10-27 11:10:55
If you’ve worked with PySpark DataFrames, you’ve probably had to rename columns, either by calling withColumnRenamed repeatedly or by using toDF(). At first glance, both approaches work the same; you get the renamed columns you wanted. But under the hood, they interact with Spark’s Directed Acyclic Graph (DAG) in very different ways. withColumnRenamed creates a new projection layer for each rename, gradually stacking transformations in the logical plan. toDF(), on the other hand, applies all renames in a single step. While both are optimized to the same physical execution, their impact on the DAG size, planning overhead, and code readability can make a real difference in larger pipelines.
View more...
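A minimal side-by-side sketch of the two renaming styles; the column names are arbitrary examples, not taken from the article:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rename-demo").getOrCreate()
df = spark.createDataFrame([(1, "a", 10.0)], ["id", "code", "amount"])

# Option 1: each rename adds another projection to the logical plan.
renamed_chained = (
    df.withColumnRenamed("id", "customer_id")
      .withColumnRenamed("code", "product_code")
      .withColumnRenamed("amount", "order_amount")
)

# Option 2: toDF() renames every column in a single projection,
# but you must list all columns in their current order.
renamed_once = df.toDF("customer_id", "product_code", "order_amount")

renamed_chained.explain()  # compare the resulting plans
renamed_once.explain()
```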
Kubernetes Debugging Recipe: Practical Steps to Diagnose Pods Like a Pro
Aggregated on: 2025-10-24 19:25:53
Automation isn’t optional at enterprise scale. It’s resilient by design. Kubernetes provides remarkable scalability and resilience, but when pods crash, even seasoned engineers struggle to translate complex and cryptic logs and events. This guide walks you through the spectrum from AI-powered root cause analysis to manual debugging, combining command-line reproducibility with predictive observability approaches.
View more...

From Distributed Monolith to Composable Architecture on AWS
Aggregated on: 2025-10-24 18:25:53
You adopted microservices for independence and agility. Instead, every deployment requires coordinating multiple teams and testing the entire system. What you built is a distributed monolith: complexity spread across systems, but still bound by monolithic coupling. The shift from technical boundaries to business-driven boundaries is the only path to true agility. Many organizations discover too late that microservices alone do not guarantee independence. Domain-Driven Composable Architecture (DDCA) provides a methodology to escape this rigidity. This article is a practical playbook for decomposing services into Packaged Business Capabilities (PBCs) aligned with business domains and mapped to AWS patterns such as EventBridge, Step Functions, and DynamoDB Streams. It explains when DDCA fits and when it does not, and covers security, anti-patterns, and operational realities, so you can adopt composability with a clear view of the investment required.
View more...

Unhandled Promise Rejections: The Tiny Mistake That Crashed Our Node.js App
Aggregated on: 2025-10-24 17:25:53
Imagine deploying a Node.js backend service that works flawlessly in development, only to have it mysteriously crash in production. Everything ran fine on your laptop, but on the live server, the process keeps shutting down unexpectedly. In our case, the culprit was a single unhandled promise rejection — one missing .catch() in our code caused Node to exit abruptly whenever an error occurred. That one “tiny” mistake made the difference between a stable service and frequent downtime. In this article, we’ll explore how misconfigured error handling in a Node/Express API can bring down an application, and how to diagnose and fix it to prevent future crashes.
View more...

Performance Testing 101: A Beginner's Guide to Building Robust Applications
Aggregated on: 2025-10-24 16:25:53
Welcome! This guide is for anyone who has built an application and wants to ensure it doesn't fall over when real people start using it. We'll walk through the essentials of performance testing without the complicated jargon, focusing on practical steps you can take to make your app robust and reliable. You may find this article somewhat abstract, but rest assured: next time we will go over a real example of building a performance test with Java, Gatling, and Docker Compose.
View more...

Strategic Domain-Driven Design: The Forgotten Foundation of Great Software
Aggregated on: 2025-10-24 15:25:53
When teams talk about domain-driven design (DDD), the conversation often jumps straight to code — entities, value objects, and aggregates. Yet, this is where most projects begin to lose direction. The essence of DDD is not in the tactical implementation, but in its strategic foundation — the part that defines why and where we apply the patterns in the first place. The strategic aspect of DDD is often overlooked because many people do not recognize its importance. This is a significant mistake when applying DDD. Strategic design provides context for the model, establishes clear boundaries, and fosters a shared understanding between business and technology.
Without this foundation, developers may focus on modeling data rather than behavior, create isolated microservices that do not represent the domain accurately, or implement design patterns without a clear purpose.
View more...

Build a Dynamic Web Form Using Camunda BPMN and DMN
Aggregated on: 2025-10-24 14:25:53
Business Process Model and Notation (BPMN) is the universal standard for visually modeling and automating business processes. It is used to design and automate workflows, defining the sequence of tasks, approvals, and user interactions. Decision Model and Notation (DMN), in turn, models the complex decision logic that can be embedded within those processes to automate business rules in a structured, reusable way. Camunda's process orchestration platform provides a collaborative environment for business and IT developers via an intuitive visual Modeler that adheres to the BPMN and DMN standards. Modeling with Camunda reduces the time it takes to develop and maintain real-world business processes through automation. Beyond automation, combining BPMN and DMN allows us to create dynamic web forms where the fields, validations, and even the flow of the form adapt to real-time business rules instead of being hardcoded. This makes applications more flexible, easier to maintain, and business-driven.
View more...

Cloud Agnostic MLOps: How to Build and Deploy AI Models Across Azure, AWS, and Open Source
Aggregated on: 2025-10-24 13:25:53
Artificial intelligence has become the centerpiece of every digital strategy. What began as isolated proofs of concept running on data scientists’ laptops is now expected to scale across clouds, business units, and continents. Enterprises quickly discover that the challenge is not building AI models. It’s operationalizing them sustainably.
View more...

Diagnosing and Fixing a Page Fault Performance Issue With Arm64 Atomics
Aggregated on: 2025-10-24 12:25:53
While running a synthetic benchmark that pre-warmed the cache, we noticed an abnormal performance impact on Ampere CPUs. Digging deeper, we found that there were many more page faults happening with Ampere CPUs when compared to x86 CPUs. We isolated the issue to the use of certain atomic instructions like ldadd, which loads a value, adds to it, and stores the result back in a single instruction. This triggered two “page faults” under certain conditions, even though it is logically an all-or-nothing operation that is guaranteed to complete in one step. In this article, we will summarize how to qualify this kind of problem, explain how memory management in Linux works in general and how an atomic Arm64 instruction can generate multiple page faults, and show how to avoid performance slowdowns related to this behavior.
View more...

I Built a Full Stack App Using Only Vibe Coding Prompts: Here’s What Happened
Aggregated on: 2025-10-24 11:25:53
You know that moment when you stare at your screen and think, “What if I just let the vibes lead?” That’s exactly what I did. I decided to build a full-stack app, not with a strict roadmap, not with a pre-decided stack, and not even with a design in Figma, but by coding my way through it by vibe. I used AI tools, intuition, and years of muscle memory to go with the flow. No formal planning, no architecture diagrams, no syntax lookups, just prompts, patterns, and pure gut feel.
View more...

Evolving Golden Paths: Upgrades Without Disruption
Aggregated on: 2025-10-23 19:10:53
The platform team had done it again — a new version of the golden path was ready.
Cleaner templates, better guardrails, smoother CI/CD. But as soon as it rolled out, messages started flooding in: “My pipeline broke!”, “The new module isn’t compatible with our setup!” Sound familiar? Every platform engineer knows that delicate balance — driving innovation while ensuring developer stability. Golden paths promise simplicity and speed, but without careful version management, they can easily turn from enablers into disruptors.
View more...

AI Won't Replace Front-End Developers, It'll Replace the Boring Parts
Aggregated on: 2025-10-23 18:10:53
I’ve been working as a front-end engineer at a service-based company. When I joined as an intern last year, I came across the term "pixel-perfect" for the first time. During my internship, I was assigned a feature where I had to display pages on the screen, with the content coming from an MDX file. I built the entire page from scratch, and when it was time for review, the reviewer turned out to be our Chief Design Officer (CDO).
View more...

Data Quality at Write Time: Engineering Reliability With Delta Expectations
Aggregated on: 2025-10-23 17:10:53
Data quality failures don't announce themselves. They compound silently — a malformed timestamp here, a negative revenue figure there — until a quarterly board deck shows impossible numbers or an ML model degrades into uselessness. A 2023 Gartner study pegged the cost at $12.9 million annually per organization, but that figure misses the hidden expense: engineering time spent firefighting data incidents instead of building features. The traditional approach treats validation as a post-processing step. You write data to storage, then run Great Expectations or Deequ checks, discover failures, and either fix the pipeline or quarantine bad records. This pattern creates a fundamental gap: the window between data landing and validation completion. In high-throughput lakehouses processing terabytes daily, that window can represent millions of corrupted records propagating downstream before anyone notices.
View more...

LangGraph Beginner to Advanced: Part 2 — Hello World Graph in LangGraph
Aggregated on: 2025-10-23 16:10:53
Awesome. Now this is quite exciting: we’re actually about to start coding in LangGraph for the very first time. Now that we’ve covered all the theory (admittedly the boring section), we’re going to code up some graphs, starting with our very first graph in this subsection. In this section, though, we’re not going to be building any AI agents. Why? Because we haven’t really learned how to code in LangGraph yet, or how to combine all these LLM APIs and tools, so I thought it would get pretty messy to build one right now.
View more...

Series: Toward a Shared Language Between Humans and Machines
Aggregated on: 2025-10-23 15:10:53
Large language models (LLMs) today produce fluent and coherent texts, to the point of giving the illusion of a real conversation. But behind this apparent mastery arises an interesting question: do machines really understand us, or are they only predicting words? In a four-part series, we will explore one of the deepest challenges of artificial intelligence: building a true common ground between human meaning and machine logic. From cognitive limits to quantum horizons, and including the strategic role of humans, each article will shed light on one facet of these questions.
View more...
Ranking Full-Text Search Results in PostgreSQL Using ts_rank and ts_rank_cd With Hibernate 6 and posjsonhelper
Aggregated on: 2025-10-23 14:10:53
In a previous article, we explored how to implement full-text search in PostgreSQL using Hibernate 6 and the posjsonhelper library. We built queries with to_tsvector, to_tsquery, and the simpler wrappers plainto_tsquery, phraseto_tsquery, and websearch_to_tsquery. This time, we’ll extend that foundation and explore how to rank search results based on their relevance using PostgreSQL’s built-in ranking functions like ts_rank and ts_rank_cd.
View more...

Applying Domain-Driven Design With Enterprise Java: A Behavior-Driven Approach
Aggregated on: 2025-10-23 13:10:53
When it comes to software development, one of the biggest mistakes is delivering precisely what the client wants. While this may sound cliché, the problem persists even after decades in the industry. A more effective approach is to begin testing with a focus on business needs. Behavior-driven development (BDD) is a software development methodology that emphasizes behavior and domain terminology, also known as ubiquitous language. It uses a shared, natural language to define and test software behaviors from the user's perspective. BDD builds on test-driven development (TDD) by concentrating on scenarios that are relevant to the business. These scenarios are written as plain-language specifications that can be automated into tests, which also serve as living documentation.
View more...

Fundamentals of Logic Hallucinations in AI-Generated Code
Aggregated on: 2025-10-23 12:10:53
Tools like GitHub Copilot, ChatGPT, Cursor, and other AI coding assistants can generate boilerplate, suggest algorithms, and even create full test suites within seconds. This accelerates development cycles and reduces repetitive coding work. Hallucinations, however, are a common problem in AI-generated code. There are several types of hallucinations, and in this article, I will focus on some basic logical hallucinations.
View more...

How to Build an MCP Server and Client With Spring AI MCP
Aggregated on: 2025-10-23 11:10:53
If it’s spring, it’s usually conference time in Bucharest, Romania. This year was, as always, full of great speakers and talks. Nevertheless, Stephan Janssen’s talk, where the audience met the Devoxx Genie IntelliJ plugin he has been developing, was by far my favorite. The reason I mention it is that during his presentation, I heard about Anthropic’s Model Context Protocol (MCP) for the first time. Quite late, though, considering it was released last year in November. Anyway, to me, the intent of standardizing how additional context can be brought into AI applications to enrich and enhance their accuracy was basically what had been missing from the picture. With this in mind, I was motivated enough to start studying MCP and to experiment with how its capabilities can improve AI applications. From high-level concepts to its practical use when integrated into AI applications, MCP has really caught my attention. The result: a series of articles.
View more...

Mastering Audio Transcription With Gemini APIs: A Developer's Guide
Aggregated on: 2025-10-22 19:25:52
Understanding Audio Transcription via Gemini APIs
Gemini models are multimodal large language models. They can process and generate various types of data, including text, code, images, audio, and video.
Gemini models also offer powerful audio transcription capabilities, enabling developers to convert spoken content into text. This can help in building a transcription service, creating subtitles for videos, and developing voice-enabled applications. If you are looking to convert speech to text using Gemini's powerful AI models, this comprehensive guide will show you how to implement audio transcription using different Gemini APIs. We will go from a basic implementation to advanced real-time streaming. Gemini supports the following audio formats as input: WAV, MP3, AIFF, AAC, OGG, and FLAC. We will look at the generateContent, streamGenerateContent, and BidiGenerateContent (Live API) APIs. You can find all supported APIs at https://ai.google.dev/api. generateContent is a standard REST endpoint, which processes the request and returns a single response. streamGenerateContent uses SSE (server-sent events) to send partial responses as they are generated. This API is a better choice for applications like chatbots, which need a faster and more interactive experience.
View more...
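As a rough sketch of a single-shot generateContent call with inline audio, using plain requests; the model name, file name, and exact request shape are assumptions and should be checked against the current Gemini API reference:

```python
import base64
import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]   # assumed to be set in the environment
MODEL = "gemini-2.0-flash"               # example model name; pick a current one

with open("meeting.mp3", "rb") as f:     # hypothetical input file
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "contents": [{
        "parts": [
            {"text": "Transcribe this audio clip."},
            {"inlineData": {"mimeType": "audio/mp3", "data": audio_b64}},
        ]
    }]
}

resp = requests.post(
    f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent",
    params={"key": API_KEY},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```

For longer recordings, uploading the audio via the Files API and referencing it from the request is typically preferred over inlining base64 data.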