swampUP Recap: “EveryOps” is Trending as a Software Development Requirement

swampUP 2024, the annual JFrog DevOps Conference, was unique in that it addressed not only familiar DevOps and DevSecOps issues, but also the specific operational challenges stemming from the explosive growth of GenAI: the need for specialized capabilities to handle AI models and datasets, and support for new personas such as AI/ML engineers, data scientists, and MLOps professionals.

Instead of adding yet another “Ops” to the ever-growing list of operational skill sets required to manage the software supply chain, JFrog suggests the all-encompassing term “EveryOps” to cover DevOps, DevSecOps, MLOps, and whatever additional Ops are on the way. This theme was highlighted in the keynote address by JFrog Founder & CEO Shlomi Ben Haim and broken down further in his “From Coding to Cognition: Developers’ EveryOps Evolution” breakout session.

JFrog CEO, Shlomi Ben Haim, presents the challenges facing DevOps teams in the near term

The well-attended breakout sessions reflected the “EveryOps” approach and reinforced the need for a single platform that unifies best-of-breed solutions to cover any requirement, whether it relates to operations, security, or AI applications. Below are summaries of select swampUP sessions that demonstrate the flexibility of the JFrog Platform in supporting EveryOps.

DevOps

If we had to combine every DevOps session into one and give it a new title, it would probably be something along the lines of “Going large in the cloud – optimizing infrastructure for enterprise success.” This year’s speakers shared best practices for scaling your DevOps infrastructure with JFrog Artifactory and the JFrog Platform at the core.

In “Deciphering Artifactory Infrastructure: Optimizing Engineering Dependencies at Credit Karma,” Amit Mishra left attendees with practical tips and techniques for scaling their development infrastructure, including recommendations on scaling applications, building database resilience, backing up applications and clusters, and tracking key observability metrics.

In his session, “Teledyne Controls’ Cloud Journey: Migrating from Self-Hosted to JFrog SaaS,” Aiden Jang offered practical considerations for determining whether moving to the cloud is right for you. He went beyond the dollars-and-cents calculations to benefits such as always having the best and latest features available without having to manage upgrades yourself.

JFrog’s own Marcus Hogue led a packed room through “Mastering the Art of Scaling: Best Practices for Tomorrow’s Workloads,” where he walked through how to size JFrog deployments for a variety of use cases, including disaster recovery, global development, AI development, and more.

One thing is for sure: even with the rise of other “Ops” disciplines, the role of DevOps isn’t going anywhere. In fact, one could argue it’s more important than ever.

DevSecOps

On the security front, the focus was on two key sessions covering runtime security and policies at scale, led by JFrog’s Harel Avissar and Shimi Bandiel, respectively.

Harel kicked off with “Full Spectrum Security: Discover the Impact of Runtime Discoveries Across Your Pipeline,” demonstrating the value of real-time visibility, integrity validation, traceability, and risk prioritization through live demos of real-world use cases. He emphasized the importance of extending security coverage to production environments while balancing the need for both security and speed.

Shimi followed with “Mastering Supply Chain Security: Continuous Delivery is Essential,” sharing insights and best practices for managing the complexities of scaling security policies in large enterprises, along with the importance of compliance, effective CVE management, and structured workflows to streamline operations.

One thing is for sure: security is a critical component of the software development lifecycle and should be implemented and maintained at every stage. Ensuring real-time visibility and comprehensive protection from security threats, attacks, and compliance risks across the entire pipeline, from development to production, is more essential than ever.

MLOps

With AI/ML taking the world by storm, demand for AI applications has skyrocketed, presenting formidable operational challenges: handling ML models and large datasets, and supporting new participants in the development process such as AI engineers and data scientists. In response to these AI/ML-specific challenges, the sessions below focused on bridging the gap between ML development and production.

The inclusion of ML in software development is rapidly increasing, but deploying these models effectively in production environments poses significant challenges. A striking 85% of AI and ML applications fail to reach production. This high failure rate stems from the distinct life cycle of ML development, which necessitates specialized tools and infrastructure.

Designed to address these challenges, JFrog ML provides a comprehensive MLOps solution. By integrating with the existing JFrog ecosystem, JFrog ML streamlines the entire ML lifecycle, from development to deployment.

Yuval Fernbach, VP and CTO of JFrog ML, called out key capabilities of JFrog ML that reduce the need for specialized tools for infrastructure and for collaboration between data scientists and other teams, including:

  • Unified Artifact Management: Managing all ML artifacts, including models, datasets, code, binaries, and dependencies, within a single source of truth – JFrog Artifactory (see the sketch after this list).
  • Enhanced Security: Leveraging JFrog’s security tools, such as Xray, to scan models and dependencies for vulnerabilities, ensuring the integrity of the entire ML pipeline.
  • Streamlined Deployment and Monitoring: Deploying models to production with a single click and monitoring infrastructure, data, and model performance metrics.
  • Enhanced Collaboration: Fostering communication and knowledge sharing between different teams involved in the ML lifecycle.
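
To make the first capability concrete, here is a minimal sketch of publishing a trained model to Artifactory so it is versioned alongside the code and dependencies it was built with. It uses Artifactory’s generic deploy-artifact REST endpoint (an HTTP PUT to a repository path); the host, repository name, path, and token variable are hypothetical placeholders, and this is not the official JFrog ML client.

```python
# Minimal sketch: publish a trained model file to an Artifactory repository
# so it lives alongside the code and dependency artifacts it was built with.
# The host, repository ("ml-models-local"), path, and token below are
# hypothetical placeholders, not part of any official JFrog ML client.
import os
import requests

ARTIFACTORY_URL = "https://mycompany.jfrog.io/artifactory"   # hypothetical instance
REPO_PATH = "ml-models-local/churn-model/1.0.0/model.onnx"   # hypothetical repo/path
MODEL_FILE = "model.onnx"

def publish_model() -> None:
    """Deploy the model binary with a simple HTTP PUT, which is how
    Artifactory's deploy-artifact REST endpoint accepts uploads."""
    token = os.environ["ARTIFACTORY_TOKEN"]  # keep credentials out of source
    with open(MODEL_FILE, "rb") as f:
        resp = requests.put(
            f"{ARTIFACTORY_URL}/{REPO_PATH}",
            data=f,
            headers={"Authorization": f"Bearer {token}"},
        )
    resp.raise_for_status()
    print(f"Published {MODEL_FILE} to {REPO_PATH}")

if __name__ == "__main__":
    publish_model()
```

Once the model is stored and versioned like any other binary, the same Xray scanning, promotion workflows, and access controls described in the capabilities above can be applied to it.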

JFrog ML offers a robust, secure, and collaborative platform to address the challenges of managing ML models in production. By streamlining the entire ML lifecycle, from development to deployment, JFrog ML empowers organizations to unlock the true potential of machine learning and drive innovation.

Adopting a single, unified platform breaks down silos, enabling fast, secure, and efficient production

These sessions show how the combined features of JFrog Artifactory, the JFrog Security Suite, and the recent integrations with GitHub and Qwak provide a full command-and-control solution for software development operations, security, MLOps, and more. To see how the JFrog Platform can bring the EveryOps advantage to your development operations, take an online tour or set up a guided one-on-one demo today.