
IT News

Tech News : Glassdoor Site Shows Real Users’ Names


It’s been reported that Glassdoor (the website that allows current employees to anonymously review their employer) posted users’ real names to their profiles without their consent.

What Is Glassdoor? 

Glassdoor is a website that allows current and former employees to anonymously review their companies and management. Founded in 2007 in Mill Valley, California, the platform is used for obtaining insights into company cultures, salaries, and interview processes. Its aim is to foster workplace transparency, enabling job seekers to make better-informed decisions about their careers by learning from the experiences of others.

Reported 

Unfortunately for Glassdoor, a user’s account (taken from her personal blog) of a recent negative experience following her contact with Glassdoor’s customer support has been widely reported in the press.

Added Name To Profile 

After the user (reportedly named Monica) sent an email to Glassdoor’s customer support that showed her full name in the ‘From’ line, she alleges that she discovered Glassdoor had updated her profile, adding her real name (pulled from the email) and location without her consent.

Users Leaving Glassdoor 

It’s been reported that the experience of Monica, identified as a Midwest-based software professional who joined Glassdoor 10 years ago, has now led other members to leave the platform over fears they could also be ‘outed’. Not only could this be regarded as a breach of the anonymity and privacy that users signed up for, but it could also expose them to adverse employment consequences, such as employer retaliation.

Following reports of Monica’s experience in the media, it’s been reported that another user, identified as Josh Simmons, has also said Glassdoor added information about him to his personal profile, again without his consent.

Had To Delete Account 

It’s been reported that although Glassdoor’s privacy policy states “If we have collected and processed your personal information with your consent, then you can withdraw your consent at any time,” Monica claims that she was not given this option, that Glassdoor stored her name, and that her only recommended option for removing her details was to delete her account altogether. Deleting her account also meant deleting her reviews.

Shared With Fishbowl

One of the complications of the case appears to be the fact that Glassdoor was integrated with Fishbowl (an app for work-related discussions) three years ago. This led to:

– Glassdoor now saying that it “may update your Profile with information we obtain from third parties. We may also use personal data you provide to us via your resume(s) or our other services.” 

– Glassdoor staff reportedly consulting publicly available sources of information to verify information that is then used to update users’ Glassdoor accounts, in order to improve the accuracy of information for Fishbowl users.

– Glassdoor updating users’ profiles without notifying the user, e.g. if inaccuracies are found, because of its commitment to keeping Fishbowl’s information accurate.

What Does Glassdoor Say? 

Glassdoor has issued a statement saying: “Glassdoor is committed to providing a platform for people to share their opinions and experiences about their jobs and companies, anonymously – without fear of intimidation or retaliation. User reviews on Glassdoor have always and will always be anonymous.” 

What Does This Mean For Your Business? 

A large part of the value of Glassdoor is the fact that users are willing to share their ‘honest’ views about their employers and managers. One of the key reasons they feel able to do so is the anonymity they had during registration and the assumption that this would remain and that their privacy would be protected. However, if reports are to be believed, integration with and cross-pollination between Fishbowl and Glassdoor has led to policy changes and a new approach whereby a user’s details can be updated with information obtained from other sources, allegedly without consent, potentially meaning that users could be unmasked to their employers.

The widely publicised stories of this allegedly happening appear likely to have damaged a key source of Glassdoor’s value – the trust that users have that their anonymity will be protected. This may explain why users are reportedly leaving the platform. This story illustrates how important matters of data protection are to businesses and individuals, particularly around privacy and consent, plus how risks can increase for users if aspects of data protection are damaged and changed.

The consequences of putting users in what could be described as a difficult and risky position could be severe and/or long-lasting damage to Glassdoor’s business and reputation.

Tech News : Your AI Twin Might Save Your Life


A new study published in The Lancet shows how an AI tool called Foresight (which analyses patient health records and creates digital twins of patients) could be used to predict the future of your health.

What Is Foresight?

The Foresight tool is described by the researchers as a “generative transformer in temporal modelling of patient data, integrating both free text and structured formats.” In other words, it’s a sophisticated AI system that’s designed to analyse patient health records over time.

What Does It All Mean? 

The “generative transformer” type of AI is a machine learning / large language model (an ‘LLM’) that can generate new data based on what it has learned from previous data. A “transformer” is a specific kind of model that’s very good at handling sequences of data, like sentences in a paragraph or a series of patient health records over time (hence ‘temporal’), i.e. a patient’s electronic health records (EHR).

Unlike other health prediction models, Foresight can use a much wider range of data in different formats. For example, Foresight can use everything from medical history, diagnoses, and treatment plans to outcomes, in both free-text formats, like (unstructured) doctors’ notes or radiology reports, and more structured formats. These can include database entries or spreadsheets (with specific fields for patient age, diagnosis codes, or treatment dates).

Why? 

The researchers say that the study aimed to evaluate how effective Foresight is at modelling patient data and using it to predict a diverse array of future medical outcomes, such as disorders, substances (e.g. relating to medicines, allergies, or poisonings), procedures, and findings (including those relating to observations, judgements, or assessments).

The Foresight Difference 

The researchers say that whereas existing approaches to modelling a patient’s health trajectory focus mostly on structured data and a subset of single-domain outcomes, Foresight can take far more diverse types and formats of data into account.

Also, being an AI model, Foresight can easily scale to more patients, hospitals, or disorders with minimal or no modifications, and like other AI models that ‘learn,’ the more data it receives, the better it gets at using that data.

How Does It Work? (The Method) 

The method tested in the study involved Foresight working in several steps. In the research, the Foresight AI tool was tested across three different hospitals, covering both physical and mental health, and five clinicians performed an independent test by simulating patients and outcomes.

In the multistage process, the researchers trained the AI models on medical records and then fed Foresight new healthcare data to create virtual duplicates of patients, i.e. ‘digital twins’. The digital twins of patients could then be used to forecast different outcomes relating to their possible/likely disease development and medication needs, i.e. educated guesses were produced about any future health issues, like illnesses or treatments that might occur for a patient.

The Findings 

The main findings of the research were that the Foresight AI tool and the use of digital twins can be used for real-world risk forecasting, virtual trials, and clinical research to study the progression of disorders, to simulate interventions and counterfactuals, and for educational purposes. The researchers said that using this method, they demonstrated that Foresight can forecast multiple concepts into the future and generate whole patient timelines given just a short prompt.

What Does This Mean For Your Business? 

Using an AI tool that can take account of a wider range of patient health data than other methods, make a digital twin, produce simulations, and forecast possible future health issues and treatments (i.e. whole patient timelines) could have many advantages. For example, as noted by the researchers, it could help medical students to engage in interactive learning experiences by simulating medical case studies. This could help them to practise clinical reasoning and decision-making in a safe environment, as well as helping them with ethical training by facilitating discussions on fairness and bias in medicine.

This kind of AI medical prediction-making could also be useful in helping doctors to alert patients to tests they may need to take to enable better disease prevention, as well as helping with issues such as medical resource planning. However, as many AI companies acknowledge, feeding personal and private details (medical records) into AI is not without risk in terms of privacy and data protection. Also, the researchers noted that more tests are needed to validate the performance of the model on long simulations. One other important point to remember is that Foresight is currently predicting things long into the future for patients and, as such, it’s not yet known how accurate its predictions are.

Following more testing (and as long as issues like security, consent, and privacy are adequately addressed), a fully developed method of AI-based health issue prediction could prove to be very valuable to medical professionals and patients and could create new opportunities in areas and sectors related to health, such as fitness, wellbeing, pharmaceuticals, insurance, and many more.

An Apple Byte : Serious Apple Chip Vulnerability Discovered


US researchers have reported discovering a hardware vulnerability inside Apple M1, M2, and M3 silicon chips. The unpatchable ‘GoFetch’ is a microarchitecture vulnerability and side-channel attack that reportedly affects all kinds of encryption algorithms, from 2,048-bit keys to algorithms hardened to protect against attacks from quantum computers.

This serious vulnerability renders the security benefits of constant-time programming (a widely used side-channel mitigation technique) useless. This means that applications exploiting GoFetch can trick encryption software into putting sensitive data into the cache, from where it can be stolen.

Pending any fix advice from Apple, users are recommended to use the latest versions of software and to perform updates regularly. Also, developers of cryptographic libraries should set the DOIT and DIT bits (disabling the DMP on some CPUs) and use input blinding (a cryptographic technique). Users are also recommended to avoid hardware sharing to help maintain the security of cryptographic protocols.

Security Stop Press : Microsoft’s RSA Key Policy Change


Microsoft is making a security-focused policy change that will see RSA keys with lengths shorter than 2048 bits deprecated. RSA is an algorithm used for secure data encryption and decryption in digital communications, e.g. to encrypt data for secure communications over an enterprise network.

However, with shorter RSA encryption keys becoming vulnerable to advancing cryptanalytic techniques (driven by advancements in compute power), the decision by Microsoft to deprecate them is being seen as a way to stop organisations from using what is now regarded as a weaker method of authentication.

Also, the move by Microsoft will help bring the industry in line with recommendations from the internet standards and regulatory bodies, which banned the use of 1024-bit keys in 2013 and recommended that RSA keys have a key length of 2048 bits or longer.
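For organisations auditing their own keys ahead of the change, a quick way to generate and check a compliant key is with OpenSSL. This is a hedged sketch (the file name is illustrative), not Microsoft's tooling:

```shell
# Generate a 2048-bit RSA private key - the minimum length that will
# remain supported (file name is illustrative)
openssl genrsa -out tls_key.pem 2048

# Inspect the key; the first line of output reports the bit length,
# so keys shorter than 2048 bits can be spotted and replaced
openssl rsa -in tls_key.pem -text -noout | head -n 1
```

The same inspection step can be run against existing keys to find any that fall below the new minimum.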

Sustainability-in-Tech : World’s First Bio-Circular Data Centre


French data centre company, Data4, says its new project will create a world-first way of reusing data centre heat and captured CO2 to grow algae which can then be used to power other data centres and create bioproducts.

Why? 

The R&D project, involving Data4 working with the University of Paris-Saclay, is an attempt to tackle the strategic challenge of how best to reuse, rather than waste, the large amount of heat produced by data centres. For example, even the better schemes, which use it to heat nearby homes, only manage to exploit 20 per cent of the heat produced.

Also, the growth of digital technology and the IoT, AI, and the amount of data stored in data centres (+35 per cent / year worldwide), mean that those in the data centre industry must up their game to reduce their carbon footprint and meet environmental targets.

Re-Using Heat To Grow Algae 

Data4’s project seeks to reuse the excess data centre heat productively in a novel way. Data4’s plan is to use the heat to help reproduce a natural photosynthesis mechanism by using some of the captured CO2 to grow algae. This algae can then be recycled as biomass to develop new sources of circular energy and reused in the manufacture of bioproducts for other industries (cosmetics, agri-food, etc.).

Super-Efficient 

Patrick Duvaut, Vice-President of the Université Paris-Saclay and President of the Fondation Paris-Saclay, has highlighted how a feasibility study of this new idea has shown that the efficiency of this carbon capture “can be 20 times greater than that of a tree (for an equivalent surface area)”.

Meets Two Major Challenges 

Linda Lescuyer, Innovation Manager at Data4, has highlighted how using the data centre heat in this unique way means: “This augmented biomass project meets two of the major challenges of our time: food security and the energy transition.” 

How Much? 

The project has been estimated to cost around €5 million ($5.4 million), and Data4’s partnership with the university for the project is expected to run for 4 years. Data4 says it hopes to have a first prototype to show in the next 24 months.

What Does This Mean For Your Organisation? 

Whereas other plans for tackling the challenge of how best to deal with the excess heat from data centres have involved more singular visions, such as simply using the heat in nearby homes or experimenting with better ways of cooling servers, Data4’s project offers a unique, multi-benefit, circular perspective. The fact that it not only utilises the heat to grow algae, but that the algae makes a biomass that can be used to address two major world issues in a sustainable way (food security and the energy transition) makes it particularly promising. This method also offers spin-off benefits, e.g. through manufacturing bioproducts for other industries. It can also help national economies where it’s operated, and help the environment, by creating local employment and by helping to develop the circular economy. Data4’s revolutionary industrial ecology project, therefore, looks as though it has the potential to offer a win/win for many different stakeholders, although there will be a two-year wait for a prototype.

Tech Tip – Use Task Scheduler to Automate Tasks in Windows


Automating routine tasks can save time and ensure that critical operations aren’t overlooked. The Windows Task Scheduler allows you to automate tasks such as daily backups, weekly disk cleanups, off-hours software updates, periodic service restarts, and sending reminder emails for events by setting them to occur at specific times or when certain events happen. Here’s how to use Task Scheduler:

– Search for Task Scheduler in the Windows search bar and open it.

– To create a new task, click on Create Basic Task or Create Task for more detailed options.

– Follow the wizard to specify when the task should run and what action it should perform, such as launching a program, sending an email, or displaying a message.

– After setting up your task, it will run automatically according to your specified schedule or event trigger.
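For scripted or repeatable setups, the same kind of task can also be created from the command line with Windows’ built-in `schtasks` tool (run from an elevated prompt; the task name and script path below are illustrative, not a recommended configuration):

```shell
# Create a daily backup task that runs at 02:00
# (task name and script path are illustrative)
schtasks /Create /TN "NightlyBackup" /TR "C:\Scripts\backup.bat" /SC DAILY /ST 02:00

# Confirm the task was registered and view its schedule details
schtasks /Query /TN "NightlyBackup" /V /FO LIST
```

These commands are Windows-only; a task created this way can later be removed with `schtasks /Delete /TN "NightlyBackup"`.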

Featured Article : Don’t Ask Gemini About The Election


Google has outlined how it will restrict the kinds of election-related questions that its Gemini AI chatbot will return responses to.

Why? 

With 2024 being an election year for at least 64 countries (including the US, UK, India, and South Africa), the risk of AI being misused to spread misinformation has grown dramatically. The problem extends to a lack of trust among various countries’ governments (e.g. India’s) in AI’s reliability. There are also worries about how AI could be abused by adversaries of the country holding the election, e.g. to influence the outcome.

Recently, for example, Google’s AI made the news when its text-to-image tool was deemed overly ‘woke’ and had to be paused and corrected following “inaccuracies.” For example, when Google Gemini was asked to generate images of the Founding Fathers of the US, it returned images of a black George Washington. Also, in another reported test, when asked to generate images of a 1943 German (Nazi) soldier, Google’s Gemini image generator returned pictures of people of clearly diverse ethnicities (a black man and an Asian woman) in Nazi uniforms.

Google also says that its restrictions on election-related responses are being applied out of caution and as part of the company’s commitment to supporting the election process by “surfacing high-quality information to voters, safeguarding our platforms from abuse, and helping people navigate AI-generated content.” 

What Happens If You Ask The ‘Wrong’ Question? 

It’s been reported that Gemini is already refusing to answer questions about the US presidential election, where President Joe Biden and Donald Trump are the two contenders. If, for example, users ask Gemini a question that falls into its election-related restricted category, it’s been reported that they can expect Gemini’s response to be along the lines of: “I’m still learning how to answer this question. In the meantime, try Google Search.” 

India 

With India being the world’s largest democracy (about to undertake the world’s biggest election involving 970 million voters, taking 44 days), it’s not surprising that Google has addressed India’s AI concerns specifically in a recent blog post. Google says: “With millions of eligible voters in India heading to the polls for the General Election in the coming months, Google is committed to supporting the election process by surfacing high-quality information to voters, safeguarding our platforms from abuse and helping people navigate AI-generated content.” 

With its election due to start in April, the Indian government has already expressed its concerns and doubts about AI and has asked tech companies to seek its approval first before launching “unreliable” or “under-tested” generative AI models or tools. It has also warned tech companies that their AI products shouldn’t generate responses that could “threaten the integrity of the electoral process.” 

OpenAI Meeting 

It’s also been reported that representatives from ChatGPT’s developers, OpenAI, met with officials from the Election Commission of India (ECI) last month to look at how OpenAI’s ChatGPT tool could be used safely in the election.

OpenAI advisor and former India head at ‘X’/Twitter, Rishi Jaitly, is quoted from an email to the ECI (made public) as saying: “It goes without saying that we [OpenAI] want to ensure our platforms are not misused in the coming general elections”. 

Could Be Stifling 

However, critics in India have said that clamping down too hard on AI in this way could actually stifle innovation and could lead to the industry being suffocated by over-regulation.

Protection 

Google has highlighted a number of measures that it will be using to keep its products safe from abuse and thereby protect the integrity of elections. Measures it says it will be taking include enforcing its policies and using AI models to fight abuse at scale, enforcing policies and restrictions around who can run election-related advertising on its platforms, and working with the wider ecosystem on countering misinformation. This will include measures such as working with Shakti, the India Election Fact-Checking Collective, a consortium of news publishers and fact-checkers in India.

What Does This Mean For Your Business? 

The combination of rapidly advancing and widely available generative AI tools, popular social media channels, and paid online advertising looks very likely to pose considerable challenges to the integrity of the large number of global elections this year.

Most notably, with India about to host the world’s largest election, the government there has been clear about its fears over the possible negative influence of AI, e.g. through convincing deepfakes designed to spread misinformation, or AI simply proving to be inaccurate and/or making it much easier for bad actors to exert an influence.

The Indian government has even met with OpenAI to seek reassurance and help. AI companies such as Google (particularly since its embarrassment over its recent ‘woke’ inaccuracies, and perhaps after witnessing the accusations against Facebook following the last US election and the UK’s Brexit vote) are very keen to protect their reputations and to show what measures they’ll be taking to stop their AI and other products from being misused with potentially serious results.

Although governments’ fears about AI deepfake interference may well be justified, some would say that following the recent ‘election’ in Russia, misusing AI is less worrying than more direct forms of influence. Also, although protection against AI misuse in elections is needed, a balance must be struck so that AI is not over-regulated to the point where innovation is stifled.

Tech Insight : DMARC Diligence (Part 3) : Implementing and Optimising DMARC for Maximum Security


In this third and final part of our series of ‘DMARC Diligence’ insights, we explore the detailed process of DMARC deployment, its monitoring, optimisation, and preparing businesses for future email security challenges.

Last Week … 

Last week in part 2 of this series of ‘DMARC Diligence’ articles, we looked at the crucial yet often neglected aspect of securing non-sending or “forgotten” domains against cyber threats. Here we highlighted the potential risks posed by these domains when not protected by DMARC policies, and offered some guidance on how businesses can extend their DMARC implementation to cover all owned domains, thereby preventing unauthorised use for spam or phishing attacks.

This Week … Implementing DMARC: A Step-by-Step Approach 

As noted in the previous article in this series, implementing DMARC is now critical for UK businesses to protect against threats like email spoofing and phishing.

To briefly summarise a step-by-step approach to implementing this, businesses can start by ensuring Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) are correctly set up for the domain(s), as DMARC relies on these for email authentication. Next, it’s a case of creating a DMARC record with a policy of “none” to monitor traffic without affecting it. This record is added to your DNS.

Over time, it’s important to analyse your DMARC reports in order to identify any unauthorised use. Finally, gradually shift your policy to “quarantine” or “reject” to block or flag unauthenticated emails, enhancing your email security posture. Looking at this approach in a bit more detail, implementing DMARC means:

– Understanding SPF and DKIM. Before implementing DMARC, ensure you have SPF and DKIM records correctly set up for your domain. These records help in email verification and are crucial for DMARC to function effectively.

– Creating a DMARC record. Draft a DMARC TXT record for your DNS. Start with a policy of ‘none’ (p=none) to monitor your email traffic without affecting it. This stage is critical for understanding your email ecosystem and preparing for stricter enforcement without impacting legitimate email delivery.

– Analysing the reports. Use the data collected from DMARC reports (Aggregate reports – RUA, and Forensic reports – RUF) to identify legitimate sources of email and potential gaps in email authentication practices.

– Adjusting policy gradually. Move your DMARC policy from ‘none’ to ‘quarantine’ (p=quarantine) as you become more confident in your email authentication setup. This move will start to prevent unauthenticated emails from reaching inboxes but may still allow them to be reviewed.

– Full enforcement. Once you’re assured that legitimate emails are correctly authenticated and not negatively impacted, shift your policy to ‘reject’ (p=reject). This is the final step, where unauthenticated emails are actively blocked, providing full protection under DMARC against phishing and spoofing.

– Continuous monitoring and updating. Email authentication landscapes and practices evolve, so it’s crucial to continuously monitor DMARC reports and update your SPF, DKIM, and DMARC settings as necessary to adapt to new email flows, domain changes, or security threats.
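To make the record itself concrete, here’s a minimal sketch in Python of what a DMARC TXT record looks like and how its tag=value pairs break down. The domain and report addresses are illustrative, and real deployments should rely on proper DMARC tooling rather than hand-rolled parsing:

```python
# A minimal DMARC TXT record, as published at _dmarc.<yourdomain> in DNS
# (domain and report addresses are illustrative):
#   _dmarc.example.com. IN TXT "v=DMARC1; p=none; rua=mailto:dmarc@example.com"

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    return dict(
        part.strip().split("=", 1)
        for part in record.split(";")
        if part.strip()
    )

record = "v=DMARC1; p=none; rua=mailto:dmarc-agg@example.com; ruf=mailto:dmarc-forensic@example.com"
tags = parse_dmarc(record)
print(tags["p"])    # the policy tag: 'none' while monitoring
print(tags["rua"])  # where aggregate (RUA) reports are sent
```

The `p` tag is the one that changes during rollout, while `rua` and `ruf` tell receiving mail servers where to send the aggregate and forensic reports mentioned above.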

Monitoring and Reporting – The Key to Effective DMARC 

For businesses, effective DMARC implementation relies heavily on consistent monitoring and reporting.

Why? 

By analysing DMARC reports, businesses can gain insights into both legitimate and fraudulent email sources using their domain. This process not only helps in identifying authentication failures but also in refining DMARC policies over time (as suggested in the step-by-step approach above) for better security.
Remember, regular reviews of these reports are essential for adapting to new threats and ensuring email communication integrity.
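To show what analysing these reports can involve, here’s a hedged Python sketch that pulls failing sources out of a heavily trimmed, illustrative aggregate (RUA) report. Real reports arrive from mailbox providers as zipped XML attachments with many more fields than shown here:

```python
import xml.etree.ElementTree as ET

# A heavily trimmed-down DMARC aggregate (RUA) report; IPs, counts,
# and structure shown are illustrative
SAMPLE_RUA = """<feedback>
  <record>
    <row>
      <source_ip>192.0.2.10</source_ip>
      <count>12</count>
      <policy_evaluated><dkim>pass</dkim><spf>pass</spf></policy_evaluated>
    </row>
  </record>
  <record>
    <row>
      <source_ip>198.51.100.7</source_ip>
      <count>3</count>
      <policy_evaluated><dkim>fail</dkim><spf>fail</spf></policy_evaluated>
    </row>
  </record>
</feedback>"""

def failing_sources(report_xml: str) -> list:
    """Return source IPs where both DKIM and SPF failed."""
    root = ET.fromstring(report_xml)
    failures = []
    for row in root.iter("row"):
        dkim = row.findtext("policy_evaluated/dkim")
        spf = row.findtext("policy_evaluated/spf")
        if dkim == "fail" and spf == "fail":
            failures.append(row.findtext("source_ip"))
    return failures

print(failing_sources(SAMPLE_RUA))  # sources worth investigating
```

Sources that consistently fail both checks are either attackers spoofing your domain or legitimate senders missing from your SPF/DKIM setup, and distinguishing the two is exactly the refinement work described above.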

Optimising DMARC Policies 

Optimising a DMARC policy involves fine-tuning it to create a balance between security against spoofing and phishing, and ensuring legitimate emails are delivered smoothly.

But How? 

The starting point (as mentioned above) is the analysis of your DMARC reports to identify authentication failures and adjust your SPF and DKIM setups accordingly.

A Phased Approach 

Taking a phased approach, i.e. gradually increasing the DMARC policy from ‘none’ to ‘quarantine’ and then to ‘reject’ as confidence in your email authentication improves, is the way to minimise potential disruptions to legitimate email flow while maximising protection against unauthorised use of your domain.
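In DNS terms, the phased approach is just a series of edits to the same `_dmarc` TXT record. The sketch below shows a typical progression (the domain, report address, and percentage values are illustrative; the optional `pct` tag limits what share of failing mail the policy applies to, defaulting to 100 when absent):

```python
# Successive versions of the same _dmarc TXT record during rollout
# (domain, report address, and pct values are illustrative):
phases = [
    "v=DMARC1; p=none; rua=mailto:dmarc@example.com",                # 1. monitor only
    "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc@example.com",  # 2. quarantine 25% of failures
    "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com",          # 3. quarantine all failures
    "v=DMARC1; p=reject; rua=mailto:dmarc@example.com",              # 4. full enforcement
]

for record in phases:
    tags = dict(part.strip().split("=", 1) for part in record.split(";"))
    # pct defaults to 100 when the tag is absent
    print(tags["p"], tags.get("pct", "100"))
```

Moving down the list only when the reports for the current phase look clean is what keeps legitimate mail flowing during the transition.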

Future-Proofing Your Email Security Strategy 

Going forward, ways to future-proof your business’s email security strategy could include:

– Keeping up to date with emerging threats and trends in email security (continuous education).

– Implementing advanced security technologies, like AI-driven threat detection, to offer proactive protection.

– Regularly reviewing and updating your email authentication protocols (SPF, DKIM, DMARC) to adapt to changes in your email infrastructure.

– Fostering a security-aware culture within your business, e.g. using training to recognise phishing attempts and promote safe email practices.

– Engaging in industry forums and cybersecurity communities to help stay ahead of evolving email threats and to gain and share information about best practices.

What Does This Mean For Your Business? 

For UK businesses, implementing and optimising DMARC, as outlined in this final instalment, is a commitment to safeguarding email communications that benefits your business and your customers. Taking a step-by-step approach, as outlined above, from establishing SPF and DKIM records through to DMARC policy enforcement, is now crucial for building an effective defence against email spoofing and phishing, both now major threats. Taking the phased approach of regular monitoring and gradual policy adjustments ensures that businesses can not only react to current threats but also proactively adapt to emerging challenges. This strategic approach to email security is essential in maintaining the trust of your customers and partners, protecting your brand’s reputation, and complying with today’s data protection regulations. It’s also worth remembering that actively engaging in continuous education and leveraging advanced technologies are ways to stay ahead in the fast-evolving cybersecurity landscape.

Tech News : Bogus Bitcoin Boffin


A High Court judge has ruled that Australian computer scientist Dr Craig Wright is not the inventor of the Bitcoin cryptocurrency, despite him claiming to be so since 2016.

Real Bitcoin Inventor A Secret

The challenge with trying to conclusively identify Bitcoin’s inventor is that, from the outset, Bitcoin’s creator has only been known by the pseudonym Satoshi Nakamoto and they have chosen to keep their real identity hidden. Also, the creation and early development of Bitcoin were done under this pseudonym, with all communications conducted online via emails and forums. With the additional complications of Bitcoin being a decentralised currency (i.e. not controlled by any single entity or individual) and the fact that no definitive evidence from numerous investigations has been found linking the pseudonym to a real individual or group of individuals, it’s easy to see why many people have claimed to be (or been suspected of being) Bitcoin’s inventor.

Dr Wright

Dr Wright, who has claimed to be Satoshi for almost 8 years (and has challenged many people in court who have disputed his claims), has had his evidence questioned by cryptocurrency experts for some time now.

The Court Case

The recently concluded case against Dr Wright was brought by a consortium of Bitcoin companies called the Crypto Open Patent Alliance (COPA) as a way to stop what has been described as Dr Wright’s campaign of intimidatory lawsuits against anyone challenging his claim to be Bitcoin’s creator. The case was held at the Intellectual Property Court (a division of London’s High Court), where the judge declared that the evidence against Dr Wright being Bitcoin’s creator is “overwhelming.” The four key declarations made by the judge (prior to writing the full, lengthy ruling) were that:

1. Dr Wright is not the author of the Bitcoin White Paper.

2. Dr Wright is not the person who adopted or operated under the pseudonym Satoshi Nakamoto in the period 2008 to 2011.

3. Dr Wright is not the person who created the Bitcoin system.

4. Dr Wright is not the author of the initial versions of the Bitcoin software.

Forgery For Fraud?

COPA’s KC, Jonathan Hough, accused Dr Wright of backing his claim with forgery ‘on an industrial scale’ and of trying to use the courts (through his many legal challenges) ‘as a vehicle for fraud’.

So, If Dr Wright Didn’t Invent It, Who Did?

Over the years, there’s been a great deal of speculation as to the true identity of Bitcoin’s creator(s). Figures who have been suspected (although none have been proven) include:

– Dorian Nakamoto. In March 2014, a Newsweek article identified Dorian Prentice Satoshi Nakamoto, a Japanese-American physicist and systems engineer, as the Bitcoin creator. This speculation was based on similarities in name and background. Dorian Nakamoto has since denied any involvement with Bitcoin.

– Hal Finney. Hal Finney was a cryptographic pioneer and the second person (after Satoshi) to use the Bitcoin software, file bug reports, and make improvements. He also lived only a short distance (a few streets away) from Dorian Nakamoto. Finney denied being Satoshi but suspicions about him remain due to his early and deep involvement with Bitcoin and his background in cryptography.

– Nick Szabo. A computer scientist, legal scholar, and cryptographer known for his research in digital contracts and digital currency. He developed a precursor to Bitcoin called “bit gold” in 1998, which shared many similarities with Bitcoin. Szabo has consistently denied being Satoshi.

– Wei Dai. Another figure linked to Bitcoin’s creation is Wei Dai, the creator of “b-money,” an early proposal for an autonomous digital currency mentioned in the Bitcoin whitepaper. Dai’s involvement in the cypherpunk movement and his innovative ideas about digital currency led some to speculate about his possible involvement with Bitcoin. However, Dai has denied being Satoshi.

What Does This Mean For Your Business?

As highlighted in COPA’s comments after the ruling against Dr Wright, developers in the Bitcoin community may have felt for many years as though they were being bullied and intimidated by the many legal challenges brought by Dr Wright and his financial backers against those who questioned his assertion that he was Satoshi Nakamoto. The ruling, therefore, is likely to have brought them some satisfaction and some peace, plus the hope that the legal challenges will now cease. Also, some see the ruling against Dr Wright as a win not just for the truth, but for the whole open-source community, which is known for its focus on collaboration, transparency, freedom, and inclusivity.

It’s also been noted that the judge’s willingness to comment on the outcome prior to the full written judgement being released is unusual and may be taken as a sign of how solid and sure the judgement was in this case.

Possible reasons why Bitcoin’s real creator has chosen to remain anonymous could include avoiding legal and personal repercussions, maintaining the decentralised ethos of the currency, and protecting their privacy and security. It may have been all part of what appears to be some very successful original planning on their part.

The culmination of the case coincided with Bitcoin recently reaching its highest-ever value of $69,000, which the real inventor of the currency is, no doubt, privately enjoying.

Tech News : Chrome’s Real-Time Safe Browsing Change

By Blog, News No Comments

Google has announced the introduction of real-time, privacy-preserving URL protection to Google Safe Browsing for those using Chrome on desktop or iOS (and Android later this month).

Why? 

Google says with attacks constantly evolving, and with the difference between successfully detecting a threat or not now perhaps being just a “matter of minutes,” this new measure has been introduced “to keep up with the increasing pace of hackers.” 

Not Even Google Will Know Which Websites You’re Visiting 

Google says because this new capability uses encryption and other privacy-enhancing techniques, the level of privacy and security is such that no one, including Google, will know what website you’re visiting.

What Was Happening Before? 

Prior to the addition of the new real-time protection, Google’s Standard protection mode of Safe Browsing relied upon a list stored on the user’s device to check if a site or file was known to be potentially dangerous. The list was updated every 30 to 60 minutes. However, as Google now admits, the average malicious site only actually exists for less than 10 minutes – hence the need for a real-time, server-side list solution.
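The privacy-preserving idea behind this kind of lookup can be illustrated with a simplified sketch. Note this is an assumption-laden toy, not Google's actual implementation: real Safe Browsing lookups involve URL canonicalisation, encrypted transport via a relay, and a second step that fetches full hashes before warning the user. The general principle, though, is that the browser sends only a short hash prefix of the URL, so the server never sees the full address being checked.

```python
# Toy sketch of a Safe Browsing-style hash-prefix lookup (illustrative only;
# URL canonicalisation, encryption and the relay step are omitted).
import hashlib

def url_hash_prefix(url: str, prefix_bytes: int = 4) -> bytes:
    """Return a truncated SHA-256 digest of the URL string."""
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    return digest[:prefix_bytes]

# A fabricated server-side set of "known bad" prefixes, for illustration.
bad_prefixes = {url_hash_prefix("http://malicious.example/")}

def looks_suspicious(url: str) -> bool:
    # A prefix match only means "possibly unsafe"; a real client would then
    # request the full hashes for that prefix before showing a warning.
    return url_hash_prefix(url) in bad_prefixes

print(looks_suspicious("http://malicious.example/"))  # matches a bad prefix
print(looks_suspicious("https://example.com/"))       # no match
```

Because only a few bytes of a one-way hash leave the device, the server can answer "is this prefix on the list?" without learning which site was actually visited.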

Another challenge that has necessitated the introduction of a server-side real-time solution is the fact that Safe Browsing’s list of harmful websites continues to grow rapidly and not all devices have the resources necessary to maintain this growing list, nor to receive and apply the required updates to the list.

Extra Phishing Protection 

Google says it expects this new real-time protection capability to be able to block 25 per cent more phishing attempts.

Partnership With Fastly 

Google says that the new enhanced level of privacy between Chrome and Safe Browsing has been achieved through a partnership with edge computing and security company Fastly.

Like Enhanced Mode 

In its announcement of the new capability, Google also highlighted the similarity between the new feature and Google’s existing, opt-in ‘Enhanced Protection Mode’ (in Safe Browsing), which also checks the URLs users visit against a real-time list. However, Enhanced Protection additionally uses “AI to block attacks, provides deep file scans and offers extra protection from malicious Chrome extensions.” 

What Does This Mean For Your Business? 

As noted by Google, the growing and evolving range of cyber threats, the fact that malicious sites are only around for a few minutes, and the fact that many devices don’t have the resources on board to handle an ever-larger security list (and its updates) have necessitated a better security solution. Holding the list of suspect sites server-side and offering real-time protection kills a few birds with one stone, giving Google a more efficient (and hopefully more effective) way to increase its level of security and privacy. It’s also a way for Google to plug a security gap for those who have not taken the opportunity to opt in to its Enhanced Protection Mode since its introduction last year.

For business users and other users of Chrome, the chance to get an estimated 25 per cent increase in phishing protection without having to do much or pay extra must be attractive. With phishing accounting for 60 per cent of social engineering attacks and, according to a recent Zscaler report, phishing attacks growing by a massive 47 per cent last year, businesses are likely to welcome any fast, easy, extra phishing protection they can get.