- Counter Ransomware Initiative Guidance Recommendations by pasan
The Counter Ransomware Initiative has published new guidance on dealing with ransomware payments to cyber criminals.
More information on the guidance can be found at this link.
- Authy 33 million phone number leak by pasan
Another day, another data leak.
It is really frustrating when companies that work in IT security, or provide products for it, end up with data breaches or security compromises.
This means companies depending on Authy now need to be aware of another potential attack vector. In addition, you can never be sure that this was all that was leaked; APTs may have full access to all internal networks at Twilio for all we know.
For many companies that handle highly sensitive information, this is likely to trigger a move away from the vendor. Imagine getting thousands of users or more to change their MFA tool.
- Infosecurity Europe 2024 by pasan
This week Infosecurity Europe 2024 was held at ExCeL in east London, from the 4th to the 6th of June.
Many of the leading vendors in cyber security were at this event with stalls, providing demonstrations and arranging future follow-ups. There were also a number of presentations and conferences going into various aspects of security. Unfortunately I was not able to attend any of them, as I wanted to connect with vendors for products that we could use in my workplace.
For those interested in discovering and understanding new products to improve the security posture of their organisation or of their clients, this event is highly recommended due to the large number of participants present.
- Is AI model collapse inevitable? by pasan
Model collapse, according to Wikipedia, refers to “the gradual degradation in the output of a generative artificial intelligence model trained on synthetic data, meaning the outputs of another model (including prior versions of itself)”.
It is an interesting and realistic concept. Generating output from a model gives the ability to produce a near-infinite amount of content from a limited original data sample. At the moment, we are at the outset of Large Language Models (LLMs) churning out new content based on human-created data. But as more and more people turn to using these models to generate content, eventually a larger portion of the content out there will be AI generated… and it will all be based on the original content.
AI does not have a way to measure the truth or the value of content. Instead, it depends on the human generator, or a reader who cares enough, to correct it if necessary. This is unlikely to happen when content generation keeps increasing exponentially, as the low barrier to entry brings in people who hope to get some monetary or other benefit from it.
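The feedback loop described above can be illustrated with a toy sketch (my own construction, not how real LLM training works — the vocabulary, sizes and Zipf-like weights are all made up). Each "generation" fits a simple frequency model to the current corpus, then replaces the corpus with samples from that model. Because a token that drops out of the corpus can never be generated again, diversity only decreases:

```python
import random
from collections import Counter

random.seed(0)

# A made-up "human" corpus: 1000 tokens from a skewed, Zipf-like vocabulary,
# so there is a long tail of rare tokens.
vocab = [f"tok{i}" for i in range(50)]
weights = [1.0 / (i + 1) for i in range(50)]
corpus = random.choices(vocab, weights=weights, k=1000)

def train_and_generate(corpus, k=1000):
    # "Train": estimate token frequencies from the current corpus.
    counts = Counter(corpus)
    tokens = list(counts)
    freqs = [counts[t] for t in tokens]
    # "Generate": sample a fresh synthetic corpus from the fitted model.
    # Tokens absent from the corpus can never reappear.
    return random.choices(tokens, weights=freqs, k=k)

diversity = [len(set(corpus))]
for _ in range(30):
    corpus = train_and_generate(corpus)
    diversity.append(len(set(corpus)))

print(f"distinct tokens: generation 0 = {diversity[0]}, "
      f"generation 30 = {diversity[-1]}")
```

Run it and the count of distinct tokens falls across generations as the rare tail is lost — a crude stand-in for the "gradual degradation" in the Wikipedia definition.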
It will certainly be interesting to see how this all works out over the next few decades. The outcome I assign a high probability to is a clear segregation of tasks that will work with LLMs, and the need for different AI models for other tasks. LLMs are fine for things like generating basic communication, summarisation and areas where the information is for personal consumption. Generating new information needs models that can follow a process of experimentation, or observation through senses, that is different from the current basic pattern matching. My belief is that, as human consciousness is largely shaped by the five senses, we would need to better integrate all these inputs into models so they can operate and evolve in a sandbox representative of the real world, or of a world we seek to understand.
My thoughts are based on the following article, which describes the attitudes in the research community on whether model collapse is inevitable.
- ChatGPT will be ‘Laughably Bad’ in 12 months by pasan
OpenAI COO Brad Lightcap has said that ChatGPT will be ‘Laughably Bad’ in 12 months’ time, highlighting the speed at which the technology is progressing. He made this comment at the 27th annual Milken Institute Global Conference.
However, I can’t help but think that if LLMs are the focus of this ‘progress’, we are unlikely to get meaningfully closer to real ‘Artificial Intelligence’ than we are now.
It feels to me that LLMs are really not the path forward to create AI. Finding words that are close together in a large enough data set and regurgitating them in a given context just seems like such a fake intelligence. There is really nothing here that points to the kind of intelligence we normally associate with living things.
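To make the "words that are close together" point concrete, here is a deliberately crude bigram model — a minimal sketch of my own, not how real LLMs work (they use neural networks over learned representations, not raw counts), but it shows how far plain co-occurrence statistics can get you:

```python
import random
from collections import defaultdict, Counter

random.seed(1)

# Tiny made-up training text; real models use billions of tokens.
text = ("the cat sat on the mat the dog sat on the rug "
        "the cat chased the dog").split()

# "Training" is just counting which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no known continuation for this word
        # Pick the next word in proportion to how often it followed before.
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```

The output is locally plausible word sequences with no understanding behind them — which is essentially the complaint above, just at a vastly smaller scale.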
However it should also be noted that this is good enough for a lot of tasks. Generalizing across a large enough data set has its uses. The trick is finding the right scenario.
Also, it has people hyped (or worried) about progress. Change is a good thing in most situations, as it drives adaptation and growth. Overall it is likely to be a net positive for humanity.