A recent NY Times article, “In Battle Over A.I., Meta Decides to Give Away Its Crown Jewels,” reported that Meta, the technology giant that owns Facebook, Instagram, and WhatsApp, had decided to open source LLaMA, its state-of-the-art large language model (LLM), making the code available to academics, government researchers, and others once they’d been properly vetted. Meta’s approach is quite different from that of Google and OpenAI, its chief AI rivals, which have kept the software underpinning their AI systems proprietary.
“Driven by its founder and chief executive, Mark Zuckerberg, Meta believes that the smartest thing to do is to share its underlying A.I. engines as a way to spread its influence and ultimately move faster toward the future,” said the article. Meta’s chief AI scientist, NYU professor Yann LeCun, noted in a recent interview that the growing secrecy of Google and OpenAI is a “huge mistake,” arguing that consumers and governments will refuse to embrace AI if it’s under the control of a couple of powerful American companies. LeCun, along with Geoffrey Hinton and Yoshua Bengio, received the 2018 Turing Award, often considered the “Nobel Prize of Computing,” for their pioneering work on deep learning.
Meta’s open source approach to AI isn’t novel. After all, the infrastructures underlying the Internet and World Wide Web were built on open source software, as was the widely used Linux operating system. And in September 2022, Meta contributed its PyTorch machine learning framework to the Linux Foundation to further accelerate the development and accessibility of the technology.
But not everyone agrees with Meta’s decision to open source LLaMA. Google, OpenAI, and others have been critical of Meta, saying that an open source approach to such a powerful technology is dangerous. Some have also wondered whether part of the opposition to open source AI models is that they could pose a competitive threat to those companies’ own proprietary models, a suspicion reinforced by the publication of a leaked Google memo that succinctly declared “We have no moat and neither does OpenAI.”
“While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly,” said the memo. “Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us.” The memo made a number of additional points, including:
“We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P [third party] integrations.”
“People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.”
“Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.”
The arguments in the leaked Google memo aren’t new. “The history of technology is littered with battles between open source and proprietary, or closed, systems,” noted the NYT article. “Some hoard the most important tools that are used to build tomorrow’s computing platforms, while others give those tools away. … Many companies have openly shared their A.I. technologies in the past, at the insistence of researchers. But their tactics are changing because of the race around A.I.”
The current discussions on the safety of proprietary versus open source AI models bring back memories of similar debates in the late 1990s about whether open source software should be used in supercomputing systems, given their critical importance to US national security and to science and engineering research.
At the time, I was the industry co-chair of the President’s Information Technology Advisory Committee (PITAC), where I served alongside our academic co-chair, CMU professor Raj Reddy, and 22 other members split evenly between industry and academia. The use of open source software in supercomputing was still fairly new, so in October of 1999 the PITAC convened a Panel on Open Source Software for High End Computing that included experts from universities, federal agencies, national laboratories, and supercomputing vendors.
The Panel held a number of meetings and public hearings over the next year and released its final report in October of 2000: “Developing Open Source Software to Advance High-End Computing.” I went back and re-read the report, as well as the Transmittal Letter to the President. Two paragraphs seem particularly relevant to our current discussions:
Is open source a viable strategy for producing high quality software? “The PITAC believes the open source development model represents a viable strategy for producing high quality software through a mixture of public, private, and academic partnerships. This open source approach permits new software to be openly shared, possibly under certain conditions determined by a licensing agreement, and allows users to modify, study, or augment the software’s functionality, and then redistribute the modified software under similar licensing restrictions. By its very nature, this approach offers government the additional promise of leveraging its software research investments with expertise in academia and the private sector.”
Should open source software be used in the development of highly sensitive systems? “Open source software may offer potential security advantages over the traditional proprietary development model. Specifically, access by developers to source code allows for a thorough examination that decreases the potential for embedded trap doors and/or Trojan horses. In addition, the open source model increases the number of programmers searching for software bugs and subsequently developing fixes, thus reducing potential areas for exploitation by malicious programmers.”
Around the same time, Linux was picking up steam in the commercial marketplace, and IBM was seriously considering whether to strongly embrace the fast-growing operating system. To help us finalize the decision, we launched two major corporate studies in the second half of 1999, one focused on the use of Linux in scientific and engineering computing, and the second on Linux as a high-volume platform for Internet applications and application development. Toward the end of the year, both studies strongly recommended that IBM embrace Linux across all its product lines, that we work closely with the open Linux community as a partner in the development of Linux, and that we establish an IBM-wide organization to coordinate Linux activities across the company.
The recommendations were accepted, and I was asked to organize and lead the new IBM Linux initiative. On January 10, 2000, we announced IBM’s embrace of Linux across the company. Our announcement got a somewhat mixed reception. Many welcomed our strong support of Linux and open source communities. But in January 2000, Linux was still not all that well known in the commercial marketplace, and a number of our customers were perplexed that a company like IBM was so aggressively supporting an initiative that, in their opinion, was so far removed from the IT mainstream.
Over the next year we spent quite a bit of time explaining why IBM was supporting Linux and open source communities. On February 3, 2000, I gave a keynote presentation at the LinuxWorld Conference in New York, where I said that we did not view Linux as just another operating system, any more than we had viewed the Internet as just another network when we announced the IBM Internet Division four years earlier. We viewed Linux as part of the long-term evolution toward open standards that would help integrate systems, applications, and information over the Internet.
The number of open source projects and developers has truly exploded over the past two decades, as evidenced by the impressive scope of the Linux Foundation (LF), whose predecessor, the Open Source Development Labs (OSDL), IBM helped organize in 2000 along with HP, Intel, and several other companies.
A recent article in The Economist, “What does a leaked Google memo reveal about the future of AI?,” reminds us that while the Internet and several related technologies run on open source software, the basic Internet infrastructure continues to support a vast number of applications, platforms, and tools built on proprietary software. “AI may find a similar balance” between open source AI models and proprietary models built on top of them using a company’s own data and algorithms. “Yet even if the memo is partly right, the implication is that access to AI technology will be far more democratised than seemed possible even a year ago. Powerful LLMs can be run on a laptop; anyone who wants to can now fine-tune their own AI.”
“This has both positive and negative implications,” explained The Economist. “On the plus side, it makes monopolistic control of AI by a handful of companies far less likely. It will make access to AI much cheaper, accelerate innovation across the field and make it easier for researchers to analyse the behaviour of AI systems (their access to proprietary models was limited), boosting transparency and safety. But easier access to AI also means bad actors will be able to fine-tune systems for nefarious purposes, such as generating disinformation. It means Western attempts to prevent hostile regimes from gaining access to powerful AI technology will fail. And it makes AI harder to regulate, because the genie is out of the bottle.”
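To make The Economist’s point about democratised fine-tuning concrete, here is a minimal sketch of what “fine-tune their own AI” can look like in practice. It assumes the Hugging Face transformers and peft libraries and an openly licensed checkpoint; the model name, adapter settings, and target modules below are illustrative choices, not anything prescribed by the article or the leaked memo.

```python
# A minimal, illustrative sketch -- not Meta's, Google's, or OpenAI's actual workflow.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "openlm-research/open_llama_3b"  # an openly licensed LLaMA-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA attaches small low-rank adapter matrices to selected attention weights,
# so only a tiny fraction of the parameters is trained. That is what brings
# fine-tuning within reach of a single workstation or laptop-class GPU.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the total weights
```

From here, a standard training loop (or the transformers Trainer) over a modest instruction dataset is enough to specialize the model, which is exactly the kind of fast, low-cost iteration the leaked memo worried about.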
Getting back to my original question: is it safe to leverage open source software and communities in the development of highly sensitive, powerful AI systems? As we enter a new era of computing, it feels like we’re revisiting questions about the safety of open source systems that were settled long ago and have been repeatedly tested and validated in the intervening decades: the open source development model represents a viable strategy for producing high quality software, and open source software offers potential security advantages over the traditional proprietary development model.