On Failure in Metallic Materials

As a continuation of my series on composite materials and health monitoring, I wanted to talk about failure in composites. In writing it, I decided that first I needed to talk about failure in metallic materials. In writing that, it turned out to be long enough to stand as a post by itself. So here it is, a small primer on failure, especially in metallic materials.

We’ll talk about composites next time.

What exactly is "failure"?

A component is said to have failed when it can no longer perform the task that it was designed for. Failure does not necessarily mean breaking, although sometimes it might. Failure in an engineering sense has as much to do with “what the designer intended” as with “the physical structure itself”.

For example, a bridge may be getting old and developing some cracks here and there. At what point do you say that the bridge is “unsafe for use”? The design and engineering teams set up some criteria to evaluate the structure. For example, they might say that “any cracks detected must not be greater than so-and-so length”. This does not mean that the bridge is going to break apart when a crack of that so-and-so length appears. It just means that the engineers are no longer satisfied with how the bridge may hold up in the future. Hence, the bridge component that developed the big-enough crack will be said to have failed.

Tacoma Narrows Bridge

Tacoma Narrows Bridge. (Source)

If the above paragraph seems to convey unnecessary caution on the part of the engineer (why call the bridge unsafe if it isn’t breaking up?), consider that many factors go into making such decisions. For example, the engineers must weigh the limits of their inspection methods: can they really detect every crack? What is the probability of a serious defect going undetected?

And there’s good reason to be cautious – if they get it wrong, bridges do collapse.

How do metallic materials fail?

In the previous section, we talked about cracks. Here’s why they form in the first place. Cracks form when the load on a given region of a component (i.e. the stress: force per unit area) becomes higher than what the material can handle. This may be because an unexpected load was put on the structure, one it was never designed for. It may also be that the structure’s capacity to withstand stress has diminished over time as the component has aged. In any case, when the stress is too much for the component to bear, the component fractures and develops a crack. The mechanics of fracture is a vast field of study in its own right, and is way beyond the scope of this piece. Suffice it to say that crack formation weakens the component, and the larger the crack gets, the worse the component’s condition becomes. Ultimately, the crack grows large enough that the component breaks in two, and is unable to take any load at all.
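To make the stress idea concrete, here is a toy back-of-the-envelope check. All the numbers are made up for illustration; this is a sketch of the force-per-unit-area definition, not a real design calculation.

```python
def nominal_stress(force_n: float, area_m2: float) -> float:
    """Stress = force per unit area, in pascals (N/m^2)."""
    return force_n / area_m2

# Illustrative strength value: a mild steel yields at roughly 250 MPa.
YIELD_STRENGTH_PA = 250e6

# A hypothetical member: 50 kN carried over a 4 cm^2 cross-section.
stress = nominal_stress(force_n=50_000, area_m2=4e-4)

print(f"stress = {stress / 1e6:.0f} MPa")  # 125 MPa
print("over yield" if stress > YIELD_STRENGTH_PA else "within yield")
```

The same component "fails" when either side of that comparison moves: the load grows beyond what was designed for, or the material's capacity degrades with age.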

Crack propagation under fatigue loading

Crack propagation under certain conditions. (Source)

For metallic components, since the material itself is nominally homogeneous (nominally, because nothing can be perfectly homogeneous, but for all practical purposes homogeneity may be assumed), the crack that ultimately causes the material to fail usually occurs where the stress happens to be the greatest. Further, as I mentioned above, the formation of a crack weakens the material, and so once a crack does form, any further worsening in that region accumulates around the same crack (a weak zone) instead of creating new cracks all the time. “Where the stress is greatest” usually depends on the geometry of the component, on how the loads are distributed, and, indeed, on tiny variations in the homogeneity of the material itself.
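How strongly geometry concentrates stress has a classical closed form for simple shapes. The Inglis result for an elliptical hole in a large plate under remote tension is a standard elasticity textbook formula (not from this post itself), and it shows why sharp, crack-like flaws are so much worse than round holes:

```python
# Inglis (1913): for an elliptical hole in a large plate under remote
# tension sigma, the peak stress at the hole's edge is
#     sigma_max = sigma * (1 + 2 * a / b)
# where a is the half-axis perpendicular to the load and b is the
# half-axis along it.

def stress_concentration_factor(a: float, b: float) -> float:
    """Ratio of peak stress at the hole edge to the remote stress."""
    return 1 + 2 * a / b

# A circular hole (a == b) concentrates stress by a factor of 3:
print(stress_concentration_factor(1.0, 1.0))    # 3.0
# A long, thin, crack-like flaw (a >> b) concentrates it far more:
print(stress_concentration_factor(100.0, 1.0))  # 201.0
```

As b shrinks toward zero the factor blows up, which is the intuition behind why cracks, once formed, keep worsening at their own tips rather than letting new cracks form elsewhere.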

Crack propagation in glass shot at extremely high frame rate. (Source)

For metals, therefore, the mantra for evaluating the component may be condensed as: “follow the cracks”. Wherever a crack seems to be worsening is where final failure will most likely occur.

That’s it for today’s discussion on crack propagation; next time we’ll get to what I had actually set out to discuss – failure in composites.

On India’s World Cup performance (they lost today)

It’s always gutting to see your team lose, isn’t it? Gutting, and infuriating. “They should have won! If only they’d played better!”

Let’s think back though, to the beginning of the World Cup, before a ball had been bowled. Remember those days, just after the tri-series with Australia and England? What if someone had said then that India would reach the semifinal? We’d have smirked. “With this team? This bowling attack?” Winning 7 games on the trot? Smirk. 70 wickets in 7 games? Best economy rate as a bowling unit? Cohesive batting performance from the entire unit? Fast bowlers bowling with pace and discipline? Smirk; smirk; smirk.

India have done well to reach the semifinals. They’ve been an excellent team. Their flaw today was that they were not a great team. But that’s okay, being excellent isn’t half bad.

Yes, they had a collective off-day. The bowlers sprayed it around a bit, uncharacteristically. The batters got out at inopportune moments, uncharacteristically. Dhawan usually scores big once he gets a start (and gets a catch dropped). Kohli usually gets himself in and ups his scoring rate, and doesn’t get out at all. There’s always Rahane, and even Raina has scored a hundred this World Cup. Usually; just not today.

They came across a genuinely better team today, and lost. No shame in that; that takes nothing away from their excellence. Then too, they actually brought Australia back from what looked to be a certain 360+ score. That’s something in itself, no?

Also, a thought: how many teams have defended their world cup titles successfully? West Indies in the 1970s, and Australia in the 1990s and 2000s. It needs a great team, not merely an excellent one, to be able to defend trophies across four year periods and in different conditions. Would we call this Indian team “great”, comparable to the West Indian and Australian teams of before? Definitely not, right? Not yet. Maybe with time and more experience, and maybe a couple of different players, but certainly not yet.

So they came across a better team. They lost. So what? They played well until they lost; they played with pride and with skill and with passion and with excellence.

They kept the Tricolor flying high. Let’s be proud of that.

Thoughts on Interstellar, the movie (spoilers!)

This post contains spoilers. Please go watch the movie, and then read this. Seriously, go watch. This is an all-time movie, and I have a feeling this will stand the test of time and gain even more popularity as time goes on. This one is that good.

I just had to jot down some thoughts about the movie — and particularly the science therein. One, I’m interested in this stuff, and can’t help it. But also, two, I heard people pooh-poohing it away, and I didn’t like that at all. So here goes.

In summary: I loved it. This is what science-fiction is supposed to look like — a combination of adventure and extrapolation of real science into the unknown. I’ve been reading and hearing some of the negative ‘reviews’, and it seems to me that most of it revolves around “hey, that’s not science, it’d never work that way!” The great thing about this movie is most of the ‘extrapolations’ are in directions that are truly unknown, and until science does cover those areas, your, mine, and the Nolans’ imagination is as good as anyone’s.

  • I loved the scientific accuracy. I usually hate it when films get their premises wrong. This one got the science right — mostly. (We’ll go into details soon.)
  • I loved that a wormhole near Saturn wouldn’t just appear out of thin vacuum. I loved that they didn’t create some bunkum theory for man to create a wormhole, and just went with “we don’t know”.
  • I loved that tidal wave on the first planet. That wasn’t just there as a plot point; being near a black hole is supposed to cause exactly that. Giant tidal forces should be the norm near a black hole, and here they were.
  • I did NOT like the fact that a planet could exist so near to a black hole. I think they showed the planet to be basically situated right near the ‘edge’ of the black hole, and at that distance, with a planetary mass, the tidal forces should work on the solids too, and basically tear the planet apart. Crucially, it’s not impossible, though. (Here again, they’re stretching the limits of the science, at most. Lovely.)
  • I did NOT like the fact that 23 years elapsed on the spaceship in orbit, when only a few (three?) hours elapsed on the surface of the first planet.

    Remember, it’s not the gravity of the planet that’s causing the time-dilation, but the nearby black hole. So the premise is that they ‘parked’ the orbiting space craft at such a distance that time dilation effects were negligible, when compared to that at the planet surface.

    Now remember that they took 8 months to travel to Mars, and 2 years to travel to Saturn. Granted, they were using gravity assists and not direct thrust, so the interplanetary voyage took longer. But it gives an idea of the orders of magnitude involved here. Let’s say that using direct thrust of the smaller craft they can travel the distance between Earth and Saturn in 3 hours. Fair? (If I’m doing the math right, Earth–Saturn is about 1.5 light-hours, so even light would take 1.5 hours to get there.)

    There’s no way that that distance causes a relative time-dilation of 23 years in the vicinity of a black hole, and does not pulverize the planet itself. That right there was all wrong.

  • I did NOT like that Coop is thrown back into 3D space through the same wormhole that they originally went through. A wormhole is supposed to be like a tunnel. You go in one end; you come out the other end. They went in one end (Saturn), and came out somewhere in the vicinity of the black hole, but not out of the black hole itself.

    If so, why would another wormhole originating inside the black hole lead back to an opening to a separate wormhole near Saturn?! That did not sit well with me.

  • I loved the black hole and event horizon sequences and visualizations. I’ll trust Dr. Kip Thorne and Cornell University grad students that they got the details right. It all looked amazing.

  • I loved the treatment of time as “just another dimension to travel through”. Seriously, we don’t really know what happens inside black holes, and one possibility is indeed that 4-dimensional spacetime can be mashed together. Beyond that, as I mentioned earlier, it’s an artist’s realm, and I liked what they did with it. It was convenient, of course, that the particular area of spacetime that Coop confronted from within the black hole was precisely the spacetime that he needed to confront, but we can allow that much cinematic coincidence, can’t we?

  • I did NOT like the idea of conveying, through Morse code via the seconds hand of a wrist watch, no less, experimental data regarding quantum mechanics and relativity. (What other ‘data’ would the robot ‘collect’ from within a black hole?!) I wish they could have found a more ingenious way to achieve this.

  • I am okay with the idea that they only needed experimental data from within the black hole to complete their theory of quantum gravity. Perhaps they had a bunch of ‘general solutions’ and the experimental data allowed them to arrive at ‘particular solutions’. Not beyond the imagination, by any stretch.

  • Hans Zimmer, take a bow. Such a brilliant score!

  • I have to watch the movie again. I don’t think I’ve taken it all in, in one sitting. Christopher Nolan, keep making movies.
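The 23-years-versus-3-hours complaint above can actually be turned into a rough calculation: how close to the hole would the planet have to sit? Here is a sketch assuming a simple non-spinning (Schwarzschild) black hole; the film's Gargantua is a rapidly spinning one, which changes the details, but this gives the flavor of the numbers.

```python
import math

# Schwarzschild time dilation: a clock at radius r (with r_s the
# Schwarzschild radius) runs slow relative to a far-away clock by
#     dt_far / dt_near = 1 / sqrt(1 - r_s / r)
def dilation_factor(r_over_rs: float) -> float:
    return 1.0 / math.sqrt(1.0 - 1.0 / r_over_rs)

# The movie's numbers: ~23 years pass for the orbiting craft while
# ~3 hours pass on the planet's surface.
required = (23 * 365.25 * 24) / 3  # ~67,000x slowdown

# Invert the formula:  r_s / r = 1 - 1 / factor^2
rs_over_r = 1 - 1 / required**2

print(f"required dilation factor: {required:,.0f}")
print(f"r / r_s must be about   : {1 / rs_over_r:.12f}")

# Sanity check: plugging that radius back in recovers the factor.
assert abs(dilation_factor(1 / rs_over_r) - required) < required * 1e-3
```

The factor comes out around 67,000, which for a non-spinning hole would put the planet within roughly one part in ten billion of the horizon radius, exactly the "it should be pulverized" territory complained about above. (Kip Thorne has written that Gargantua was taken to spin at very nearly the maximal rate precisely so that such an orbit could plausibly exist.)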

On avoiding plagiarism

I’ve been a student panelist at the Virginia Tech Graduate Honor System (GHS) for a few years now, and by far the most frequent infractions students are accused of involve some form of plagiarism. In some cases, alas, the students seem perfectly aware of what they’re up to, but very often, it seems that they just didn’t realize that what they were doing was wrong, or indeed, anything out of the ordinary.

Unfortunately, whether you knew and understood or not, if you did it, well, you did it. On that note, here are some pointers on avoiding plagiarism.

Let me focus on writing in particular, even though this applies equally well to any other creative task. If I had to summarize the concept in one sentence, I’d put it this way: when you’re writing something, there should be no ambiguity in the reader’s mind as to who actually composed the words in different portions of your document. If the document header contains your name, the assumption is that you wrote it, unless you specify otherwise. You’re perfectly fine using material from other sources and authors, as long as you make the authorship of that material explicitly clear.

Let’s say you’re referring to Wikipedia to understand a particular term or concept to include in a paper. You have two options:

  • Cite Wikipedia as a source, and use the words from Wikipedia within quotation marks.
  • Or, read and understand the material (but don’t memorize it word-for-word), and then close the webpage. Now try writing about the concept that you just read about. Or better yet, come back in an hour and write about it. Chances are the words you write are your own words and your own understanding, even though you read about it on Wikipedia. You should still cite Wikipedia as the source of your information, of course.

    If, while writing, you find yourself having to refer to the Wikipedia article to “refresh” your memory of the language used, you’d better cite the article and include the relevant portions verbatim, and within quotation marks.

The quotation marks are important in addition to the citations. As I mentioned above, a citation is a must as the source of your information, even if the words and composition are your very own; quotation marks, on the other hand, signal that the words themselves are someone else’s. If you don’t use quotation marks, it appears that the words are your own. If they aren’t, guess what you’re guilty of!

Also, remember to be sparing in using material verbatim from sources. A couple of sentences at most, and on rare occasions, perhaps a paragraph or two. If you’re using a paragraph, enclose the entire paragraph in quotation marks, and/or consider italicizing or indenting it to distinguish it from your other paragraphs. Remember, your article is your own, and should almost entirely comprise your own sentences. (This seems like a no-brainer, but I’ve seen instances where almost the entirety of a write-up was “compiled” from various sources.)

There are some excellent resources on the internet about avoiding plagiarism. Here’s one: http://www.plagiarism.org/plagiarism-101/what-is-plagiarism.

To repeat once again, there should be no ambiguity as to the authorship of any portion of your work. Make it clear and cite the source, and you’ll be fine. Please, don’t get caught in embarrassing situations only because you didn’t know better. :)

Apple’s Curse

Last Fall, Apple included a fingerprint sensor in its latest iPhone, and called the technology ‘TouchID’. A few days ago, Samsung did the same, including its own fingerprint sensing technology in its latest Galaxy S5 phone.

The blogosphere has been aflutter about one small difference between the two launches: when Apple launched their technology, there was a huge uproar about the implications of using fingerprints as an authentication tool. As everyone has been pointing out, even a US Senator, Al Franken, issued a public email (PDF) addressed to Apple asking for clarifications and explanations regarding the technology and its implications. In contrast, Samsung’s new technology has received no such attention.

Apple: Think Different

Apple: Think Different.

Here are my two cents on the reasons:

First, Samsung’s technology has followed Apple’s by almost half a year. In this time, users have had the chance to use the technology first hand, and have realized that the world has not, in fact, turned on its head. Thus, a similar technology from Samsung, even though its implementation is far more wide-ranging, has not raised any new questions or paranoia.

Second, and more importantly, Apple is a modern cultural and technological icon. There’s no way around it—anything that Apple does is subject to way more scrutiny than any of its competitors. And this is because—even if subconsciously—we have come to expect greatness, caution and prudence from this company and its products. Hence, for example, the attention to Apple’s Chinese production lines, even though every other technology company uses the same companies, and the attention to Apple’s tax schemes, even though every other company does the same—perfectly legal—thing.

This is how it should be.

Apple has shown itself to be the leader of the pack, the pioneer of modern technology; the company to follow, imitate and plagiarize from. Apple has similarly shown leadership and candor in other matters: when scrutinized about their tax practices, they actually suggested tax reforms; when a bill to solidify employee non-discrimination was on the table, Apple endorsed it, and their CEO Tim Cook personally talked about it. And, oh, even without a law being present, Apple of course already had in-house rules of similar effect, as a matter of principle; it simply would not do for it to be otherwise.

With great power, as they say, comes great responsibility, and this extra scrutiny, the extra attention and paranoia, is the flip side to their influence and strength: their curse along with their blessing, if you will.

May Apple carry this curse with pride and with distinction, for times to come.