How many adolescents die from bike accidents?
But teens are not primary school children, and they range well beyond 13 and 14. Why would you ignore 15-19? It seems your point only covers a minority of cases, in which case any recommendation will have minimal impact. Why are you so concerned about a minority of cases?
The #1 killer of teens is dangerous driving, most often influenced by peer pressure. Removing the peers by putting them on a bike would reduce the teen mortality rate by far more than the mortality rate of teens on bikes going over 30 mph. See, stats can be used in many ways, and not always in support of your opinion. Which is why it is important to choose a source that specifically relates to the topic, if you don’t want it pointed out that your source is irrelevant to the discussion.
You’re welcome to your opinions. I’m just pointing out that a study of primary school children is irrelevant to this particular thread. If you have studies on teens, I’d love to read them.
That’s primary school children. Not teens.
Indeed, by pretending to ignore what I wrote but devoting time to putting in a reply, having no basis for your mistaken assumptions and following up with insults that only relate to your own behaviour, I’m finally comfortable calling you out as unfit to be CEO of your own skid marks. Couldn’t even troll your own turds.
Yeah, choose ignorance. We’ll both be happier.
“Learn more about how to keep yourself safe by testing your instincts below and guessing whether each instance is a scam, using real-life examples.”
It distinctly does not say to research online and verify the information.
As for tests outside academia, such as this one, even a bone-headed dunce understands that tests measure the knowledge and ability you have, not what you google online. To the point that if a test allows you to use other sources, that is always specifically stated, so that normal, reasonable people do not treat it as a normal, reasonable test and complete it with their inherent knowledge and ability alone. I’m sorry you missed this valuable and important life lesson in learning. Explaining in the answers that you should have known to use outside sources is exactly what I have stated: a bad test.
“The best test phishing emails realistically emulate actual phishing emails. Intentionally adding errors only serves to train employees to catch bad phishing attacks.”
I’m glad that as a CEO you don’t actually produce any content for your company. Emulating phishing emails means including the errors that are in phishing emails; those are how you train people to recognise a phishing email. If you don’t include the errors, then the only true verification of a genuine versus phishing email is verifying with the purported sender through another communication channel. Not at all an effective policy, I’m sure you would agree.
No one’s butt hurt here. Treating a genuine email with caution and wariness is inherent good phishing awareness behaviour. If you can pull your vacuous head out of your voluminous arse for a moment, you will realise that once again, this is a bad test, a bad quiz, not an effective teaching tool, and just plain old click bait. Disparaging it is an appropriate response, and a fucktard such as yourself, with your vaunted claims of related professional acumen, trying to defend it is reprehensible.
You’re just pointing out that you are overqualified for this test.
At its root, it is a TEST. Not many TESTs allow you to Google for answers and supporting information. Unless otherwise specified, any TEST provides in the question the information needed to determine the answer. By not providing all the information and not informing you to utilise any source available to obtain extra ESSENTIAL information, it’s a bad test. Intended to trick you.
You and I both know if we create a test phishing email with no mistakes, it’s not a failure if people click on it. It’s a failure on our part for creating a BAD TEST. Same concept.
In the phishing awareness course I wrote and sell, I do advocate confirming that domains, phone numbers, logos, and other contact details match those on the official website.
I don’t advocate that when they receive a bill for something they know they didn’t buy, they should go to Google.
And with Google’s current state, I could easily buy a domain and buy ads to put it at the top of the search results. Googling the answer isn’t actually the answer. Verifying against known legit sources is (see the sketch below).
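To make that concrete, here is a minimal sketch, not from the course itself, of what checking a link against known legitimate domains could look like. The allowlist and the example URLs are made up for illustration.

```python
# A minimal sketch: check a link against domains you already know are legitimate
# instead of searching for them. The allowlist and example URLs are hypothetical.
from urllib.parse import urlparse

KNOWN_LEGIT_DOMAINS = {"paypal.com", "facebook.com", "yourbank.example"}

def is_known_legit(url: str) -> bool:
    """Return True only if the URL's host is, or is a subdomain of, a known-legit domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in KNOWN_LEGIT_DOMAINS)

print(is_known_legit("https://www.paypal.com/myaccount"))       # True
print(is_known_legit("https://paypal.com.billing-check.info"))  # False - lookalike domain
```

The point is that the check is against a list you already trust, not against whatever a search engine (or a bought ad) happens to rank first.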
It’s a shit test, which more than half of the people in this thread got right, yourself excepted.
I mean, they are two different aspects of security. Pen testers are important, but they can’t help you if an employee clicks on the wrong link.
While yes, that’s an accurate quip, it actually does highlight a deeper issue in the industry. If everyone passes your scam test, they don’t need to buy your scam test.
Additionally, scam emails aren’t a 50/50, yes/no, pass/fail matter. It’s more a combination of red flags used to gauge how risky the email is to click links in, reply to, download attachments from, and so on; a rough scoring sketch follows below.
Currently the scam-testing industry has no way to rate an individual’s ability other than how many scam emails they did or didn’t click on. That is a false metric. It incites scam testers to trick people in order to justify their value to the customer.
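To illustrate the red-flag idea, here is a minimal sketch of scoring an email by the flags spotted rather than a single click/no-click verdict. The flag names, weights, and thresholds are invented for the example and aren’t from any real product.

```python
# A rough sketch of red-flag scoring rather than a binary pass/fail verdict.
# Flag names, weights, and thresholds are hypothetical, purely to illustrate the idea.
RED_FLAG_WEIGHTS = {
    "sender_domain_mismatch": 3,   # From: address doesn't match the claimed organisation
    "link_text_url_mismatch": 3,   # visible link text differs from the real destination
    "unexpected_attachment": 2,    # attachment for something you never ordered
    "urgency_or_threats": 1,       # "act now or your account will be closed"
    "generic_greeting": 1,         # "Dear customer" instead of your name
}

def risk_score(flags):
    """Sum the weights of the red flags spotted in an email."""
    return sum(RED_FLAG_WEIGHTS.get(flag, 0) for flag in flags)

def risk_band(score):
    """Translate a score into a rough risk band instead of a pass/fail verdict."""
    if score >= 5:
        return "high: verify with the sender via another channel"
    if score >= 2:
        return "medium: don't click links or open attachments yet"
    return "low: still stay alert"

spotted = {"sender_domain_mismatch", "link_text_url_mismatch"}
print(risk_band(risk_score(spotted)))  # high: verify with the sender via another channel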
Yep. It relies on information not present in the example. It’s intended for most people to get wrong.
Similarly, the Facebook one genuinely looks like a scam unless you know of the Facebook case.
Ah okay, I thought you were killing the “bosses” without following the story prompts. Sounds like you are following the prompts, so ignore me.
I hear ya, but you’re missing major chunks of side quests and backstory that way. It’s a fantastic story when you play it the way it’s intended.
If it can’t see numbers, then it isn’t as smart as your $5 calculator or the majority of the human race. If you can convince it that it’s wrong, it’s even less intelligent than that.
It barely passes as a language model and only passes as a conversational model. Having citations doesn’t mean it understands citations. Having incorrect citations quite simply proves that it doesn’t understand what a citation is meant for. It does not understand the concept.
ChatGPT is pre-trained on a number of datasets, all of them sampled pre-2021. Nothing after that date exists for ChatGPT. That isn’t intelligence. It doesn’t possess the intelligence to understand the nature of its own databases. And if you really don’t think the databases it was trained on came from the internet, please show us a source.
It’s continually entertaining how you keep pointing out the substantial limitations of a language-model AI and yet insist it shows more intelligence than an average brain that has none of those limitations and achieves more accurate, better results every minute of every day. And then claiming it understands concepts when that concept itself is not part of its architecture is really astounding. I can almost identify the exact neuron that’s misfiring in your brain.
While it’s humorous how personally you are taking critiques of ChatGPT, it is unfortunate that you are also demonstrating a fundamental lack of basic understanding of how ChatGPT works. Because of that, you have inflated what you believe ChatGPT is doing.
Even when it gets basic maths wrong repeatedly. Because I can tell it 2+2=5 and it will agree with me. Conversationally. Since it has no concept of what 2+2=5 means.
Even though it has no memory of previous conversations, you believe it somehow retains understanding of concepts it discusses.
Even though it relies on information scraped from the internet for the knowledge to answer questions, which is why it can cite sources that don’t exist or don’t support its claims, clearly demonstrating a fundamental lack of understanding of the concept, or even of the concept of citing sources.
Even though it was literally trained by humans telling it which were the three most correct conversational responses out of the five answers it gave to every calibration question, you still believe it actually possesses intelligence above any human, who can hold a conversation without making any of these mistakes.
I clearly rate ChatGPT’s “intelligence” about as low as possible, even non-existent. I also must concede that, in this situation, it is smarter than at least one human I am aware of.
New knowledge is simply creativity, which AI distinctly does not have. The shoelace and the Rorschach test are variations of the same point. ChatGPT regurgitates info from the internet and uses confirmation bias to present it conversationally. ChatGPT cannot understand the concept that a shoe has a lace that should be tied; it can only answer a question about that by using pre-published information related to tying shoelaces. As for the Rorschach test, even with a visual component, ChatGPT is by its nature incapable of interpreting the data itself. That is quite simply not what the engine does.
Understand what ChatGPT does; do not project your idea of what an AI can do onto its single, occasionally accurate trick.
Can it tie a shoelace? No. If you gave it manipulators and a shoe, would it tie the laces? No. Can it do a Rorschach test? No. Can it create a new idea? No.
It can barely pretend to talk reasonably about these things, because talking reasonably about anything is all it is designed to do. That is not intelligence.