OpenAI has announced the ability to fine-tune its powerful language models, including both GPT-3.5 Turbo and GPT-4.
Fine-tuning allows developers to tailor the models to their specific use cases and deploy these custom models at scale. The move aims to bridge the gap between AI capabilities and real-world applications, heralding a new era of highly specialised AI interactions.
Early tests have yielded impressive results: a fine-tuned version of GPT-3.5 Turbo has demonstrated the ability to match, and on certain narrow tasks even surpass, the capabilities of the base GPT-4.
All data sent in and out of the fine-tuning API remains the property of the customer, ensuring that sensitive information remains secure and is not used to train other models.
The deployment of fine-tuning has garnered significant interest from developers and businesses. Since the introduction of GPT-3.5 Turbo, the demand for customising models to create unique user experiences has been on the rise.
Fine-tuning opens up a realm of possibilities across various use cases, including:
Improved steerability: Developers can now fine-tune models to follow instructions more accurately. For instance, a business wanting consistent responses in a particular language can ensure that the model always responds in that language.
Reliable output formatting: Consistent formatting of AI-generated responses is crucial, especially for applications like code completion or composing API calls. Fine-tuning improves the model’s ability to generate properly formatted responses, enhancing the user experience.
Custom tone: Fine-tuning allows businesses to refine the tone of the model’s output to align with their brand’s voice. This ensures a consistent and on-brand communication style.
One significant advantage of fine-tuned GPT-3.5 Turbo is its extended token handling capacity. With the ability to handle 4k tokens – twice the capacity of previous fine-tuned models – developers can streamline their prompt sizes, leading to faster API calls and cost savings.
To achieve optimal results, fine-tuning can be combined with techniques such as prompt engineering, information retrieval, and function calling. OpenAI also plans to introduce support for fine-tuning with function calling and gpt-3.5-turbo-16k in the upcoming months.
The fine-tuning process involves several steps, including data preparation, file upload, creating a fine-tuning job, and using the fine-tuned model in production. OpenAI is working on a user interface to simplify the management of fine-tuning tasks.
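The data-preparation step uses OpenAI's chat-style JSONL format, in which each line is one training example containing a list of messages. A minimal sketch of preparing and sanity-checking such a file (the file name and example content are illustrative; the commented-out client calls reflect the upload and job-creation endpoints as documented at launch):

```python
import json
import os
import tempfile

# Each training example is one JSON line: a list of system/user/assistant messages.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a support bot that always replies in French."},
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant", "content": "Votre commande est en route."},
    ]},
]

def write_training_file(examples, path):
    """Serialise examples as JSONL, the format the fine-tuning API expects."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

def validate(path):
    """Basic pre-upload sanity checks: every line parses, every role is known."""
    allowed = {"system", "user", "assistant"}
    with open(path, encoding="utf-8") as f:
        for line in f:
            ex = json.loads(line)
            assert all(m["role"] in allowed for m in ex["messages"])
    return True

path = os.path.join(tempfile.gettempdir(), "train.jsonl")
write_training_file(examples, path)
print(validate(path))  # True

# The remaining steps use the openai client library:
#   file = openai.File.create(file=open(path, "rb"), purpose="fine-tune")
#   job = openai.FineTuningJob.create(training_file=file.id, model="gpt-3.5-turbo")
# Once the job completes, the resulting model id is used in chat completion
# requests just like a base model.
```

Validating locally before upload avoids paying for a training run that fails on malformed data.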
The pricing structure for fine-tuning comprises two components: the initial training cost and usage costs.
The introduction of updated GPT-3 models – babbage-002 and davinci-002 – has also been announced, providing replacements for existing models and enabling fine-tuning for further customisation.
These latest announcements underscore OpenAI’s dedication to creating AI solutions that can be tailored to meet the unique needs of businesses and developers.
Figma Inc., the startup behind a cloud service used by Microsoft Corp. and Uber Technologies Inc. to design interfaces for their applications, has landed a $200 million funding round at a $10 billion valuation.
Figma shared the news with Bloomberg today. According to the publication, the investment included participation from Durable Capital Partners, Morgan Stanley and Sequoia, which also participated in the startup’s previous $50 million fundraiser last April.
Between Figma’s stealth launch in December 2015 and its $50 million fundraising round last April, the startup built an installed base of over 4 million users. Its clients include many of the biggest players in the tech industry. Figma’s platform is primarily used by corporate software design teams, which rely on it to sketch an interface and then turn the sketches into an interactive prototype that behaves as if it were a real application.
Figma provides a number of unique features for the sketching and prototyping phase of a project. When creating the initial mockup of an interface, users have access to a technology called vector networks, which Figma presents as an improved version of the digital pen found in most graphic design programs. The startup’s vector networks simplify some common design tasks such as drawing geometric shapes.
For the prototyping phase of projects, Figma offers a feature called Auto Layout. The technology automatically transforms design details, such as the distance between two buttons or their location in the interface, into code, speeding up application projects by reducing the need for developers to perform the task manually. Figma’s platform also automates some of the tasks involved in updating an interface prototype. For example, if a designer drags an item from the middle of a list to the top, Figma can automatically rearrange the other entries.
Built on top of Figma’s core design capabilities is a collaborative feature set. Several designers can edit a file at the same time and exchange inputs via a built-in commenting system.
“Our vision is to make design accessible to everyone,” Figma co-founder and CEO Dylan Field wrote in a blog post today. “It means creating expert tools for designers and also taking on all the roles involved in product development. But above all, it’s about opening access and empowering teams to think, feel and work in a design-driven way.”
Figma plans to increase its workforce to 500 employees by the end of the year, according to Bloomberg, more than triple the number it had at the start of 2020. The startup also hinted that it is considering making acquisitions.
Figma has raised over $332 million in total funding to date. Its main competitors are Adobe Inc. and InVision Inc., a startup that received a valuation of $1.9 billion after its last funding round in 2018.
A trial of the English coronavirus app is getting under way.
It will initially be limited to residents of the Isle of Wight, the London Borough of Newham, and NHS volunteer responders.
The app will be available in Apple and Google’s online stores, but users will need to enter a code to activate it.
The software will tell users to self-isolate for a fortnight if the app detects they have been close to someone else diagnosed with the virus.
Baroness Dido Harding – who heads up the wider Test and Trace initiative – had earlier voiced concern about enabling the automated contact-tracing feature, fearing that many people could be falsely flagged and told to go into quarantine.
The app has several other functions, including:
An alert system that informs users of the coronavirus risk level close to their home, with the area defined by the first part of their postcode
A QR barcode scanner, so users can check in when they visit a venue and be told if others there later tested positive
A symptom-checking tool, which allows users to book a free test and get the results via the app
A countdown function that comes into effect if they are told to self-isolate, so users can keep track of how long to stay at home
It initially works in five languages, with plans to add more soon.
The contact-tracing element of the software is based on Google and Apple’s privacy-centric system.
The developers acknowledge there are still issues with measuring the distance between handsets, meaning some people will be incorrectly logged as being at high risk.
When trying to detect whether two handsets are within 2m of each other, lab tests indicate:
31% of cases are missed when the handsets were within range
45% of cases are incorrectly flagged when the two handsets were in fact further apart
However, if the boundary is set at 5m, the accuracy rates radically improve.
Then the handsets detect each other in more than 99% of all cases, regardless of whether iPhones or Android devices were involved.
This is not useful in practice, but indicates the flaw that caused the original NHS Covid-19 app to be cancelled has been solved. That product often failed to detect cases involving two iPhones because of restrictions imposed on third-party software by Apple.
The team behind the new app acknowledges more work needs to be done to reduce the number of false positives and false negatives that occur at 2m, but is optimistic they can achieve this.
Part of the problem at present is that Apple and Google refuse to share the raw Bluetooth signal data involved.
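Apps like this typically infer distance from the Bluetooth received signal strength (RSSI) using a log-distance path-loss model, which is inherently noisy indoors. This is a sketch of that standard formula, not a description of the NHS app's actual algorithm, and the calibration constants are illustrative:

```python
def estimate_distance(rssi_dbm, measured_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance in metres from RSSI via the log-distance path-loss model.

    measured_power_dbm is the expected RSSI at 1m (device-dependent);
    path_loss_exponent varies with the environment (~2 in free space, higher indoors).
    """
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# At the calibration power itself, the estimate is exactly 1m...
print(round(estimate_distance(-59), 2))  # 1.0
# ...but a few dB of attenuation (easily caused by a pocket, a bag, or a wall)
# shifts the estimate substantially: -6 dB roughly doubles it at n=2.
print(round(estimate_distance(-65), 2))  # 2.0
```

Because small signal fluctuations translate into large distance errors near the 2m boundary, classifying "within 2m" from RSSI alone is unreliable, which is consistent with the false-positive and false-negative rates reported above.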
While the two show no signs of backing down, they will shortly release a new version of their tool that should improve matters.
This development has also been welcomed by those involved with Switzerland’s SwissCovid app.
“While the updated Google/Apple exposure notification API [application programming interface] still aggregates and shuffles data for privacy reasons, it will expose more information needed by the app to compute exposure more precisely,” explained Prof Mathias Payer from the EPFL university in Lausanne.
Test and Trace officials say the motivation for the app is to give “maximum freedom at minimum risk”, but acknowledge it is not a “silver bullet”.
“By launching an app that supports our integrated localised approach to NHS Test and Trace, anyone with a smartphone will be able to find out if they are at risk of having caught the virus, quickly and easily order a test, and access the right guidance and advice,” said Baroness Harding.
However, she is not yet ready to say when a national rollout could occur.
An academic who had served as an ethical advisor to the original scrapped app was positive about the fact that the trial was not limited to the Isle of Wight this time.
“This time it’s a more diverse area – and not just one full of older white people – because it was clear that previously very little could be gained from analysis of the demographics,” said Prof Lillian Edwards.
But she added that the government still had a “battle to persuade people” to install the software.
“The evidence from Italy is that people aren’t installing their Immuni contact-tracing app, but they might when the number of infections rises again.”
Another public health expert was even more sceptical.
“Even if they have got it working, the app is unlikely to make a difference,” said Prof Allyson Pollock from Newcastle University.
“The issue is not just the contact tracing but the ability to get people to isolate and quarantine. And that means financial support needs to be provided by the government.”