Quick Page Navigation
Subject versus Device
Multimedia #1 - Overwhelming the Senses
Multimedia #2 - Text, Audio and Visual
Media in Media
Partners With our Brains
Sacred 24, the "Cinema" Spec
Storage - Archiving and Access
Disclaimer
Before "Blended"
Learning the ballet way
Ed Tech - Personal
Local or Outside
Rolling Your Own Ed Tech
Getting Across
Narrative
Facts, Sacred Facts and Methods
Focus on Information
Tools of the Trade
Camera Basics, No Frills
Hardware Usability
Fitting the Tools to Humans
Delivery
Web Pages
Anatomy of HTML
HTML Files
CSS Stylesheets
Scripts - JavaScript
Downloadable Fonts
Formatting for Devices
Page Weight
Servers and Platforms
Images
Lighting
Audio
33 1/3 LPs
Gain Staging
Video for Subject
Rules for Shooting
Video Streaming
PowerPoint
New Media
Review Quiz

"Construction" Note:

I vastly expanded this page as a starting point for a video covering several areas I see as overlapping parts of what I do: dance photography/videography, education tech, learning concepts, journalism (subject knowledge and "getting across"), computer programming and web programming. These are all parts of my life and work history, and every photograph I take incorporates all of them, as well as history.

This file, as it is now, is far too unwieldy for practical use, or even for keeping topics organized, so my plan is to break it into a series of separate documents, with links in the panel to the left, and to interweave the existing panel links into a whole.

For now, wade through. I'll evolve this into an instructional script, or a set of such scripts. I also intend to set up a couple of self-published books on the web, as well as a lesson set in multiple documents. With all the added material, this is rough for now. My apologies. Better is coming.

Cheers,
Mike Strong

 

MultiMedia in Practice

Subject versus Device

"Subject Knowledge" is the most important camera skill.
Everything flows from that.

Don't get me wrong about the term multimedia. I use it casually, often enough. Still, there is a certain meaninglessness to the word "multimedia." I can't imagine showing up to shoot a show or event and bringing "multimedia." I have never worked a concert where someone was looking for their multimedia rather than their stills or video or slides or mic levels and placement, sound checks and so forth.

At the same time, we've reached a point in media generally where we lose contact with content or meaning as we let fast cuts in movies and video dominate; they prevent our ability to register content. This is basically montage technique gone stupid. Montage creates a feeling without letting you hold on to detail. The same impulse also promotes extended long takes (easier to do in digital) which exist for the glorification of the mechanism, pushing away thought and theme and concept. In radio, the aversion to the tiniest quantity of "dead air" leaves neither room nor time for the imagination which radio can promote as we fill in the non-aural senses.

The central mistake people make when looking at a photograph or movie is to assume that a camera created the picture, as if a brush painted a picture or a stove cooked a meal. Most photographers, even when shooting a specialty such as dance, do what I call shooting the camera rather than shooting the subject. I've yet to see an ad for a photo workshop, regardless of topic, that concentrates on subject knowledge rather than on the equipment.

Without the subject there is no point in the image. For years I've told people that were I to give a dance-photo workshop I would start with a week of tap lessons, beginning ballet and music-listening, then, only at the end, exercises in shooting to the music, in manual mode, using single-drive only. And no cameras until then. The dance lessons need to be first and need to be focused on. I've found over the years that I absolutely have to set my camera aside if I am to get anything from a dance lesson. Once my hand touches that camera, even a little, even once, my brain is lost to the camera. My lesson is lost. If this is a workshop for which you showed up and paid money, your money and your time are lost. Get the dance down first. You are not learning steps. Your body and senses are learning the feel of dance. When you do, at the end, pick up your camera again, your body memory will inform your camera usage.

In the same way, to me, speaking of "multimedia" puts the cart before the horse. While I've been shooting both stills and video for many years, and recording audio, and working with live shows and producing recorded shows, I never think of it as multimedia. I am always thinking of the subject first and then what I need to cover it; the subject determines the technical methods. I spend a good deal of time refreshing or recharging batteries, clearing recording media, making sure camera defaults haven't changed (they can), arranging which mics and cameras I will use, and packing them into bags along with the tripods or light stands and any lights I need. I never, ever, think, "I'm getting my multimedia together."

The same when I tear down, load out and return home to download recording media, then synch and edit. These are just the means to store and edit the results from the show or other event. Beyond that, I am always making some sort of change in the "kit" I have available to work with. Maybe I should just call it "the kit-keeper job."

Long ago I grew a bit weary of being excited by each new "thing" or method and lusting after each new device. Computers and software have repeatedly been sold as a sort of magic elixir to cure anything. Now I wait a bit to determine whether something is really a new product, just a new package, new paint on the old package, or flat-out snake oil. Sometimes the "new paint" is a better version and sometimes a worse one (such as Windows changing its default image-viewing and video-viewing applications to ones which are far slower and take longer to load than the old ones).

Many words can be used for the media mix which forms a single presentation, whether it is a standard stage mix or overwhelms the senses. Here is a quick list off the top of my head. I won't even try to pretend it is complete or definitive.

  • Environmental Visual Jukebox (the first description of the event producing our term),
  • Illustrated Music,
  • Multimedia (our term),
  • Multi-media (early spelling of our term),
  • Music-Cum-Visuals,
  • Discotheque,
  • Psychedelic,
  • Ambient Video (from the DJ's projections at a local social dance for New Years 2020),
  • Massed Media,
  • New Media, generally used to describe news media on the web in the late 90's to early 2000's, but largely a defunct term now,
  • Rich Media,
  • Multi-Platform (claim seen on a video documentary),
  • Multipurpose and I'll just add in multi-task as a related concept (really, mostly, time-sharing)
  • --- and, remember the term ---
  • Audio-Visual (AV).
  • and this: "Director of Visual and Immersive Experiences," the title for a position at National Geographic.

Or the newer ideas of combining live cameras, projections, and live stage as part of a single live performance where the players merge between the screen and the stage (as in my "Tour of the Bolender Center" for KC Ballet in 2011 - http://www.mikestrongphoto.com/CV_Galleries/VideoEmbed_BalletBall.htm).

Stage locations run left to right across the stage. In person, our brains determine who is featured. On camera this just looks weak. The easiest way to think of shifting stage locations for camera is to fold the left-right alignment backward from center stage. Cameras need front-to-back locations for performers in order to produce an equivalent feel to a live show.

On stage, locations for players are 90 degrees opposed to locations for camera (left to right for live events, front to back for camera framing). You can see this in the illustration above.
Go here for expanded coverage of shooting for dance:
http://www.mikestrongphoto.com/CV_Galleries/PhotoCV_SubjectKnowedgeForDance.htm

 

Multimedia #1 - Overwhelming the Senses

Multimedia is a fuzzy term, first coined in 1966, although the elements of using multiple forms of presentation together have been with us in spectacle and performance throughout recorded history, even from the beginnings of our species, as suggested by the performance of a jungle shaman in "A Man Called Bee." In that film, an anthropologist films a Yanomamö shaman slapping his legs, buzzing his lips and in general giving a performance which was probably mesmerizing to his audience. Watching him I thought he would be great with a fuzz box and a set of lights.

The term "Multimedia" as first used, described a particular event with immersive audio-visual technology flooding the senses.

When I started into serious photography in the summer of 1967 (I'd shot pictures before, but without concentration), the term "multimedia" was new by exactly one year. Although the word "multimedia" was coined for an event in the summer of 1966, the impetus for the event was a 1965 Christmas party in Bobb Goldsteinn's Greenwich Village studio. According to Wikipedia, Goldsteinn appealed to multiple senses at the same time. He "... created an environmental visual jukebox that illustrated music by surrounding the spectator with manually synchronized light effects, slides, films, moving screens, and curtains of light under mirror balls that kept the room in spin."

In July of 1966 Goldsteinn coined the term "multimedia" to promote his resulting show, "LightWorks at L'Oursin," in Southampton, Long Island, NY. The next month (August) Variety writer Richard Albarino wrote, "Brainchild of songscribe-comic Bob (‘Washington Square’) Goldstein, the ‘Lightworks’ is the latest multi-media music-cum-visuals to debut as discothèque fare." This was hardly the first time combinations of stage effects were used for an event, but it was the first time they were given the name "multimedia," a label which remains with us today. Later the work of political consultant David Sawyer was termed "multimedia." Sawyer's wife Iris was one of Goldsteinn's producers. The term itself quickly merged into the culture and was used in various ways, although in the last 30 years the usage has shifted.

The internet took over the term "multimedia" to describe "documents" and other files ("resources"), usually accessible via the web, which combine text, databases, images and video in various combinations. In the meantime theatrical shows seldom use the term. This announcement from my email helps illustrate why I have trouble using the term "multimedia" as if it clearly defined itself:

Dreamflower Circus
Welcome to Dreamflower Circus where imagination meets reality and science. Immerse yourself in the experience as aerialist soar through the air, contortionists twist in nightmarish shapes,and dancers come in and out of your vision. With live interactive projections, live music, installation art, a virtual reality room, live art demos, and a chance to see what your brain on art looks like. Oh and there will be magic, how else do we bring dreams to life without a little magic.

The "Dreamflower Circus" announcement included an illustration labeled "mixed media." This was not billed as a multimedia show, just a show with a mix of entertainment methods and entertainers performing under a theme title. Which of those show elements would you choose to handle if you are the multimedia person? In my experience that goes to the producer and director or directors to dole out responsibilities and put together the crew who will bring the show together, a themed variety show.

Multimedia #2 - Text, Audio and Visual Elements Combined in a Document

More simply, "multimedia" refers to the use of various media in a single work, such as a show or web page with text, images, lights, audio and video, on any of several "platforms." As generally used today, the media in multimedia are usually electronic, or at least electronically created or controlled, as well as constrained within the boundaries of the computer or phone or tablet, and seldom immersive except for a small but growing "virtual reality" segment. A few examples:

  1. Texts
  2. Audio: (in various forms and combinations, each also considered a medium: wax cylinders, lacquer or vinyl records, wire recorders, magnetic tape [reel to reel, cassette], wav files, mp3 files, live sound, sound tracks [sound on film], sound tracks with video, mono, stereo, 5.1, Dolby, etc.)
  3. Still Images (by themselves each considered a medium: photos, drawings, paintings, and so forth)
  4. Moving pictures:
    1. animations
    2. video
    3. movies
  5. Posters and banners
  6. Post cards
  7. Email, often with multiple media attached or embedded
  8. Presentations:
    1. title slides
    2. PowerPoint
    3. stage shows
  9. Infographics (which I often find confusing, muddled and self-impressed, another hyped "technology," rather than well-expressed text information)
  10. Interactive interfaces (mice, touchpads, touchscreens, voting buttons, etc.)
  11. Video connection apps for phone and computer (laptop, desktop)
  12. Live streaming of events from any device including smart phones
  13. Tele- and video-conferencing, organizing and messaging technologies, which seem to sprout faster than dandelions, such as:
    1. Apple FaceTime (iPhone & iPad only),
    2. Zoom,
    3. Microsoft Teams, (business and school only, other usage defaults to Skype)
    4. Skype, (Microsoft default for general connection. For school or business, use Teams)
    5. Google Duo,
    6. Google Meet, (formerly Google Hangouts - for members of Google's G Suite [formerly Google Docs] although GMail users can join meetings)
    7. TikTok,
    8. Snapchat,
    9. Dubsmash,
    10. Cisco WebEx,
    11. BT MeetMe (phone with passcode, has scheduling add-ins for MS Outlook),
    12. Facebook Messenger,
    13. WhatsApp Messenger,
    14. Houseparty,
    15. GetVokl,
    16. Discord,
    17. And others ...
  14. Virtual reality and immersive 360 environment
  15. Video combining film, stills, drawings and paintings, with semi-animations of stills and embedded drawings or paintings under changing visual animation (as labeled for "Unladylike" from American Masters), in a constantly changing, demi-montage technique of presentation and cuts with voice-overs.

Media within Media

Even closed captions and subtitles, created as auxiliaries to movies and video, are their own media. They support better understanding of the scripted audio for hearing viewers and let deaf or hard-of-hearing viewers "see" the audio: essentially another version of the movie. In a similar vein are "title" and "alt" attributes in web pages, which offer information to non-sighted readers and also carry other information such as cutline IDs. In writing closed captions I found the text-on-screen delivery creates another sense of the script and shifts what you look at. Alt and title text, in turn, are used by screen readers, which adds yet another form of media and another version of the page for a blind visitor. How that mixes with the body text shifts how the page content is revealed.
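In a web page that auxiliary layer is only a pair of attributes. Here is a minimal sketch in browser JavaScript (the file name, wording and cutline label are hypothetical, not from any actual page of mine):

    // Build an image carrying its parallel text versions.
    const img = document.createElement("img");
    img.src = "dancer-leap.jpg";                        // hypothetical file
    img.alt = "Dancer at the top of a split leap";      // read aloud by screen readers
    img.title = "Cutline ID: studio rehearsal, 2010";   // tooltip for sighted visitors
    document.body.appendChild(img);

A screen reader speaks the alt text in place of the picture, so the same page arrives as a different medium for a blind visitor, just as captions make a different version of a movie.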

Media So Common We Forget it is Technology

Don't forget paper (or carved stone), long-proven media technologies which have continued to function, for centuries, without batteries or mains. Paper still gives my brain the opportunity to reach out and grab the material. That is how the material gets to us: when our brains reach out to the source, not when a technology attempts to shove it at us. Imagine shoving rolled-up paper into a hole in our heads, or bashing our heads with the Rosetta stone. Information doesn't work that way. It can't be forced with technology. Or, as I like to say, a camera won't take the picture for you, but the wrong kind of camera can get in your way.

Or consider more recently created media we seldom include as "multimedia," such as PowerPoint, which includes text, animations, video, still images and audio, and is often projected to an audience as part of a presentation. It is one of several types of presentation software. It is the direct descendant of title slides, which used to be created using multi-exposure vertical cameras with enlarger heads and litho film, exposing long rolls of color slide film, usually Ektachrome, for use in a Carousel or other slide projector. Back in the 1970s we did title-slide work in the photo labs I was in. This was gradually supplanted by early digital slides using film printers to create Ektachrome slides or slide strips (half-frame strips of 35mm film which pulled through a projector one frame at a time, used a lot in classrooms).

Immersions

Virtual reality goggles and headsets are the latest iteration of VR, going back almost 40 years to when game rooms had booths with screens for early electronic gamers. The latest versions are worlds beyond the 1980's, but we thought it was great even back then. At the same time the concept was used in flight simulators, including "Flight Simulator" software with various airplanes you could fly on your Commodore 64, Amiga and other personal computers. This was greatly extended in commercial simulators used to train pilots, with a much more immersive experience (more screens outside the cockpit windows), complete with hydraulics simulating some of the seat-of-the-pants feel of flying. Simulators have a long history, going back to 1929 with the first Link Trainer prototype, created as a way to teach pilots how to fly on instruments. The first commercial model was sold in 1934 for $3,500 after the Army Air Corps lost 12 pilots flying mail under instrument conditions.

The theater space with rows of seats and aisles on the sides and middle is so common we seldom realize where it comes from or what the architecture dictates. We used to film the sohrabi festival each year at UMKC in an event room in the student union: flat floor, a stage at one end, folding chairs for people to watch, and food and other tables around the periphery. People would wander back and forth between seats and tables, talking and meeting while performers were on stage. When the university built a new student union they included a small lecture stage and theater combination with raked seats in a narrow space leading to a stage. That year's sohrabi was held in that theater space. Right away people were walking up and down the aisles as they were used to, talking to friends, but there were no food tables, and Nicole and I kept hearing calls to be quiet or sit down. We finally realized that the architecture of the space dictated much of the expected behavior in that space. The sohrabi was like a town-square festival, and a formal theater space really didn't work well. The next year sohrabi was held in a large flat event room at the top of the union.

It also made us realize that even these formal settings were not originally used as we use them now, nor were the rules of decorum the same. Operas are still very long, but back then everyone knew the plot. More than that, people were often there to see people they knew, so they would slip in and out of the formal space. They had no radio or television or any other entertainment. So they were not there just to show up, sit for three hours, applaud, exit and go home. They were there for more social reasons.

And this. Not really virtual reality, but I think you can see how it parallels. There is what I call "The Expanse," for which there is no camera in the world capable of creating the feeling of space and power in the landscape and sky that comes from simply standing outside in the middle of it, on the prairie, letting it seep into your bones. Even in your car, driving west on I-70, you feel the power of sheer space. When I first traveled from Kansas City to Hays I was reminded of that power. It was a replay of moving back to the midwest in 1976 after living in upstate New York with its tiny towns (Mayberry would be large in comparison) and with dots on the map being 5 or 6 miles apart instead of the 50 miles of the midwest. Back then I realized I had forgotten the physical sense of size. On that first trip to Hays I realized that I had again gotten used to a closer space; now I again felt the draw of the openness.

Partnering With our Brains

Selecting by our Brains

Our brains select a little at a time, building it into a full internal picture. A little at a time is all the brain can take in at once. Think of it like building a ship in a bottle. All the materials are assembled, folded, put through the neck of the bottle, then expanded. A fully-expanded model would break if you tried to shove it through the neck. Similarly, any attempt to push more information at us than our brain's "bottleneck" can let through is simply thrown away (not selected) by our brains. If too much is shoved our way the brain will just shut it out and may close down for most input. Physical sensors get overwhelmed and shut down, essentially taking cover.

For example, music played too loud is mostly lost as the pressure waves slam into our audio sensors ("hair cells") like a wrecking ball, losing the delicate full range our ears are capable of hearing, even to the point of destroying those sensor hairs and causing physical loss of hearing. With too much sound of any kind, such as a confusion of sound, our listening brain starts shutting down, selecting only some items. With sound that is too loud, as with any wrecking ball, which hair cells are destroyed is never predictable for a given person and event, wiping out random segments of the frequency spectrum. That means merely turning up the volume on a hearing aid won't bring comprehension. Generally the high-volume audio "wrecking ball" destroys hearing a little at a time, sneaking in unnoticed.

Matching to our Brains

Again, this is a matter of getting it backward. For decades, film and then video students were taught that our perception of motion in a motion picture came from the image sequences and "persistence of vision." That is because for decades we asked the wrong question. We should have simply asked how we perceive motion, not how movies create a sense of motion. Cart before horse. The technical device, again, can't force the brain to do something it isn't designed to do. We perceive motion in film and video when the frame rate matches the "sampling" mechanism in the brain. The images below help to illustrate this.


See a fuller treatment here (this is the "Sense of Motion" entry in the top left menu panel):
http://www.mikestrongphoto.com/CV_Galleries/LessonExamples/MultiMedia/ApparentMotion.htm

Here Eric Sobbe, a professional dancer working with the kids, is shown in a 2010 studio rehearsal of the Chinese part in The Nutcracker for the American Youth Ballet. I shot Eric at the top of each of four sets of two leaps. In each set of two leaps he tucks his legs underneath, then he hits the floor and leaps again, this time with a split jump. There are four sets of these, so eight leaps in all. For this example animation I used one of the sets, one tuck and one split, put the two shots together in an animated GIF and adjusted the times.

At no point does Eric move from the split to the tuck directly, or the tuck to the split directly. He always lands after each and then jumps again. So there are no shots showing anything moving between tuck and split, nor could there be. However, looking at the GIF, with only these two images, Eric appears to float in the air moving from tuck to split and back again, and even seems to pass through intermediate positions (mostly perceived in a larger version). Of course there are none in the original source images. And, because there are no images of any tweened positions, there can't be any kind of "persistence of vision" at work (a theory disproved as far back as 1912 by Max Wertheimer's apparent-motion experiments, which means film schools have been teaching this wrong for decades; what I call "persistence of error").

The point of this is to reinforce the concept that anything we do with technology needs to fit how the brain works if we want technology that transfers information. The effect above doesn't happen if you get the timing wrong. Most applications of "multimedia" simply shove technology out the door, into our faces, with little if any consideration of whether any message will be absorbed or any effects will be fully appreciated. As a result a lot of hard work is just thrown away.
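To see how timing-dependent the effect is, here is a minimal browser-JavaScript sketch of the same two-frame animation (the file names are hypothetical stand-ins for the two stills). Change the interval and the illusion of motion appears or falls apart:

    // Alternate two stills, as in the animated GIF of the tuck and the split.
    // There are no in-between images; apparent motion comes from timing alone.
    const frames = ["eric-tuck.jpg", "eric-split.jpg"]; // hypothetical files
    const img = document.createElement("img");
    document.body.appendChild(img);
    let i = 0;
    setInterval(() => {
      img.src = frames[i];
      i = (i + 1) % frames.length;
    }, 120); // try other values: much slower or faster and the "motion" vanishes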

The Sacred 24 - The Real Origin of 24 fps

What I am about to say is true. It is also heresy in almost any "cinema" discussion and will generate flaming online. Here is an example of elevating a technical feature into the defining specification for a particular kind of experience. It is based not on actual history but on an attempt to rationalize current practice as the result of supposedly near-sacred decisions in the past. I assure you there is a history, and it doesn't match any claim of aesthetics that goes into believing 24 frames per second is the one biblically superior frame rate. That is the entire problem with elevating 24 frames per second as the standard which defines film, and especially video, as "cinema." Time and again we are given the choice of mere video or shooting in "cinema," and usually the only difference is frames per second. The real story is a typical back-of-the-envelope practical engineering decision.

In 1927 the competition was to be first with a commercial sound-film technology. Warner was developing Vitaphone, film synched with records for sound. Fox was developing Movietone, sound on film, film printed with an optical sound track on the side. At the time there was only one commercial film gauge, 35mm, and one frame ratio, 4 to 3. No one counted film in frames, only in feet per minute. In 1927 Stanley Watkins was chief engineer with Western Electric, working with Warner. He got together with Warner's chief projectionist to go over standard film speeds already in use. Silent film was shot at about 60 feet per minute, though it was projected at various speeds from 60 fpm to 90 or 100+ feet per minute, because there was no sound to distort. The better houses ran films at the original speed, while lesser houses ran them faster in order to get more showings per day. So Watkins decided to compromise at the round number of 90 feet per minute.

At that rate the number of frames per second came to 24 fps, but no one needed to care about frames per second until working at other film widths, such as 16mm or 8mm. Watkins stated in 1961 that if they had really done it right they might have researched for 6 months or so and come up with a better rate. Fox was doing its own research and was considering 85 feet per minute. If Fox had won that race we would be running cinema at 22-2/3 frames per second instead of 24 fps (remember, they were thinking in feet, not in frames).
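The arithmetic behind those numbers is easy to check: standard 35mm film runs 16 frames to the foot, so frames per second is just feet per minute times 16, divided by 60 seconds. A quick sketch:

    // 35mm film: 16 frames per foot, 60 seconds per minute.
    function fps(feetPerMinute) {
      return (feetPerMinute * 16) / 60;
    }
    console.log(fps(90)); // 24     - the Warner/Watkins compromise
    console.log(fps(85)); // 22.67  - Fox's candidate (22-2/3 fps)
    console.log(fps(60)); // 16     - typical silent shooting speed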

By 1931 the sound-on-film system had replaced the sound-on-disk system because it was so much handier to fix when film broke, even if the audio was not quite as good. With sound on disk, if the film broke you had to replace the damage with the same number of blank frames, otherwise you would wind up out of synch. With the sound track on film, the most a break could cost you was a second's worth of out-of-synch sound. By 1931 the Fox system was designed to run at the same 90 feet per minute in order to fit the existing system established by Warner. So in essence pure chance and a casual compromise set the speed of film travel, and easier handling brought in the sound-on-film method.

One of the arguments I often heard, and, sorry to say, repeated in good faith, was that 24 fps was a cost compromise between bad sound at 16 fps (silent shooting speed) and something faster, which would mean more film and more cost. But what I, and seemingly everyone else, didn't notice was that sound quality for the Warner system was independent of the film speed, because the sound came from a record playing at 33 1/3 rpm (another standard spec, covered below on this page, though not until 1948). So, using the audio-on-disk system, the film's frame rate didn't affect sound quality in any way.

There was never a grand aesthetic vision determining the frame rate. The original decision wasn't even about frames per second but about feet per minute. Note that we don't talk about "frameage" but we do talk about "footage" as a measure of shooting time, even for digital files, which are not measured in feet. The "aesthetics" of 24-frames-per-second "cinema" is an artifact of a practical engineering decision made in the moment. We have far better cameras today but insist on remaining in 1927. The people in 1927 were not trying to stay in that year. If they had had more advanced tech at the time they would have used it.

For that matter, if the status of film in 1927 was the absolute gold aesthetic standard, then not only the frames-per-second rate but the film proportions (aspect ratio, width to height) of 4:3, established in 1892 by Thomas Edison, should have been held to. In a sense, they were. Television had a 4:3 format, mainly because that was what people expected, again a practical decision. Once television was established and expanding in the early 1950's, movie people saw a potential rival and invented "widescreen" for projection in theaters. Like good practical engineers they figured out a way to use the existing equipment with one small modification, the lens. They shot with "anamorphic" lenses which took in a wide horizontal angle and compressed it to fit a standard 4:3 frame. Then, on projection, they used an anamorphic projection lens to spread the 4:3 back out to widescreen, with various ratios for a while but eventually standardized at 2.35:1. An aspect ratio of 1.85:1 was introduced in 1953, almost the same as the 16:9 used in today's television.

In a similarly practical way, the 25 frames per second television rate in Europe and the 30 frames per second rate in the US were decided for the most practical of reasons. Television cameras needed a standard clock to trigger frames, and the line current (50 hertz in Europe and 60 hertz in America) provided a ready timer to synchronize video frames. And, not exactly for aesthetic reasons, but because of the limits of image retention on a cathode ray tube (the phosphors died out too soon), each frame was divided into two interlaced fields; in other words, 50 fields per second in Europe and 60 fields per second in America.

Only in the last couple of decades, as video cameras and sensors exceed what film can do, has the "cinema" term been used to differentiate "mere" video from the more snobbish "cinema," usually just video at 24 frames per second. Yet the cameras are more and more interchangeable, and the highest-level video ("cinema") cameras, always shooting at 24 frames per second (of course), are used like social classes to claim top rank, even though many of the lowest-ranking video cameras are not only better than most of the old film cameras but are even used for cinema: "Midnight Traveler" was shot on three smart phones, and "For Sama" was shot on an ordinary camera by a filmmaker recording her own family's life, and her baby's, under threat of death. "For Sama" is cinematic regardless of frame rate or camera; it had won 59 international awards and had 40 other nominations at this writing (https://www.imdb.com/title/tt9617456/awards).

Then My Eyes Crossed, Several Times Over

Not long after writing this I saw an email from Videomaker magazine featuring an article titled "How to Make a Movie" by Sean Berry. I've done documentaries and a movie and more than enough productions (hundreds, at least, of long and medium works), but thought I would look at the article anyway. To be honest the magazine hasn't had much for me for a couple of decades. In reading, I hit a section of the article which repeated the same trope I've just been talking about. Just to reinforce the concept of persistence of error, here is a cut and paste from the article
(https://www.videomaker.com/how-to/shooting/how-to-make-a-movie-everything-you-need-to-know/):

How to make a movie: production

Camera settings

Typically, when shooting a film, you should aim for shooting at 24 frames per second. Why? Almost every film is shot at 24 fps, so your film will automatically look more cinematic. However, if you shoot a bit higher, that is still okay. Most video cameras are automatically set to 30 fps, but you can likely change the settings to 24 fps. Additionally, if you want to sprinkle in some slow-motion shots in your film, shoot at higher frame rates like 60 fps and 120 fps and later slow it down to 24 fps in post. Slow-motion almost always makes footage more cinematic. Just don’t overdo it.

Again, 24 is some mysterious and sacred number to these promoters of 24. In this case the picture of the author looks pretty young, but I also hear this from people my age. I have to repeat: there was NEVER any aesthetic consideration or research in the decision to use the film speed now described as 24 frames per second. It was never about frames but about footage (90 feet per minute), chosen to match the projection rates being used in 1927 in cinema houses for silent-film showings.

It is a stupid place (there, I've said the "s" word) to get hung up. Picture quality in terms of resolution, tonal range, color balance and framing are the tools that go into an image, whether film, video, digital stills, oil paint, watercolor, charcoal, fresco or tapestry. And giving the viewer time to register what is happening, and who someone is, is crucial to establishing a connection with the work, whether it is a movie, a radio essay, or anything else in which you are telling a story.

If 24 frames per second makes something cinematic, then what were all the silent movies shot at 60 feet per minute (which works out to 16 frames per second)? Just waiting for cinema to come along some day? Nor does it take massive cameras.

The multiple award-winning "For Sama" is (copy and paste from the film's website) "... a feature documentary ... of a 26-year old female Syrian filmmaker, Waad al-Kateab, who filmed her life in rebel-held" Syria "as she falls in love, gets married and gives birth to Sama." Her camera, a DSLR shooting 30 fps video, is the same tool available to almost anyone.

And "Midnight Traveler" was shot on three Samsung smart phones, again at 30 fps. This is about Afghan film-maker Hassan Fazili and his family running from the Taliban who had a death order out for him. Going to Tajikistan and Hungary and, after the film, to Germany, and still uncertain status. Editing and post production in Premiere give the "film" its look.

For that matter, a lot of film, mostly for television, was shot at 30 frames per second. Two movies by Mike Todd, first "Oklahoma" and then "Around the World in 80 Days," were shot at two frame rates, 24 and 30. In "Oklahoma" each scene was shot twice, once at 24 fps and once at 30 fps. The next year, with "Around the World in 80 Days," he shot each scene once, with both cameras strapped together, running at the same time. Mike Todd wanted a steadier picture and better image resolution (note, those are both aesthetic reasons). He got steadier images and better resolution, but he didn't get theaters to change their equipment.

Cheers and Seinfeld are two shows shot on film at 30 fps that come quickly to mind. I haven't been able to find a full list of shows. If you are shooting film for television, which is broadcast at 30 fps, why would you shoot at a different rate from the target rate? If you shoot at 24 frames per second you have to either repeat every 4th frame or run a pulldown scheme (such as 3:2 pulldown) to borrow fields from one frame and mix them with the adjacent frame to flesh out the number of frames needed. Pulldown methods have ghosting artifacts. You could repeat a frame, but then you have to stop the film for that repeated frame while maintaining a very loose sound loop to keep the sound running at a steady rate over the sound head for the audio transfer. The best way, used on many television shows, is to shoot at the end-usage target frame rate of 30 fps.

[Illustration captions: (1) Frames created from 24 fps video using pulldown to generate an extra frame for playback at 30 fps; a field from one frame is joined with a field from the next frame, producing the "ghosting" artifact. (2) Single frames of the same video shown at the 24 fps rate it was shot at; this is also what you see if, out of every four frames, you get five by repeating one of them.]

If your editor allows it, you can take a 24 fps file and duplicate every 4th frame for playback at 30 fps. You will have a hard time noticing the expected jerkiness. US television runs at 30 fps (actually 29.97 fps), so why shoot at a different frame rate?


Here is my illustration of two "pulldown" schemas, 3:2 and 6:4. Remember, every frame of interlaced video is composed of two fields.

In a 3:2 pulldown (shown at left), every 4 frames shot at 24 fps are converted to 5 frames at 30 fps by grafting fields. Call the two fields of each 24-fps frame "a" and "b":
30-fps frame 1 gets both fields of 24-fps frame 1 (one field is used again next).
30-fps frame 2 gets field a from 24-fps frame 1 and field b from 24-fps frame 2.
30-fps frame 3 gets field a from 24-fps frame 2 and field b from 24-fps frame 3.
30-fps frame 4 gets both fields of 24-fps frame 3 (one of which was just used).
30-fps frame 5 gets both fields of 24-fps frame 4.
The "ghosting" effect, which you see above in the top left two pictures, is the mix of fields from two adjacent frames; the same mapping is sketched in code below.

To derive 30 fps without pulldown in Vegas Pro, my preferred editor since 1999, right-click on the clip ("event" in Vegas terminology) and choose "Properties" from the popup menu. Then, in the "Video Event" tab of the dialog, choose "Disable resample". Leave both project and resample rates at 1.000.
This way, out of every 4 frames shot at 24 fps, one frame is shown twice, giving you 5 frames and eliminating the ghosting artifact.
Even though you might expect to see judder, you will be hard pressed to detect any judder from this method.

 

Storage, Archiving and Access

First in line, for how to store and deliver all this material, are the means to store, distribute and archive the recorded results and the recordings which are part of a presentation or just a record: the software and the hardware. I haven't known a time when the storage media, the hardware and most file formats were not changing. In terms of the machines, many old devices can no longer operate, or cannot read the old files, for various reasons. Even when drive belts can be replaced, often capstans are flattened and can't be fixed or replaced. And when the entire machine is workable, often the media are not readable, whether through "digital rot," uncorrectable checksums, a recording layer flaking off the substrate, or a file format no longer recognized by existing software.

Sometimes the machine works but nothing remains which will read the files or play the film (except for increasingly hard-to-find specialists with still-running old equipment). Often, especially in video, the worst formats are professional formats, which are always changing in "bleeding edge" fashion. The amateur or prosumer formats seem to last longest and have the best chance of being read by machines and software over time. Video/movie files stored for archive need to be visited every few years to 1) make sure the hard drives still function [they have limited lives, but you are usually not told that], 2) copy the old files to new disks, keeping the storage media current and providing duplicate/redundant safety copies, and 3) copy and convert any file format which may be going extinct to the newest format expected to last a while. A sketch of one small piece of that routine follows.
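For the checking part of that routine, even a small script helps. Here is a minimal Node.js sketch (the directory and output file names are hypothetical, not my actual setup) that records a checksum for every file, so the next pass over the archive can spot silent corruption:

    const fs = require("fs");
    const path = require("path");
    const crypto = require("crypto");

    // Hash one file so a later pass can detect "digital rot."
    function hashFile(p) {
      return crypto.createHash("sha256").update(fs.readFileSync(p)).digest("hex");
    }

    // Walk a directory tree, collecting a checksum per file.
    function walk(dir, out = {}) {
      for (const name of fs.readdirSync(dir)) {
        const p = path.join(dir, name);
        if (fs.statSync(p).isDirectory()) walk(p, out);
        else out[p] = hashFile(p);
      }
      return out;
    }

    // "/archive/2019_shows" is a hypothetical archive path.
    fs.writeFileSync("archive-checksums.json",
      JSON.stringify(walk("/archive/2019_shows"), null, 2));

Run it when a disk is written, and again on each revisit; any file whose hash has changed needs to be restored from a redundant copy.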

A quick, off-the-cuff, longevity assessment ( where longevity == reliability )

Gone, hard to find, difficult or impossible to read: Laserdiscs (the "wave of the future"), 16-inch transcription LPs (for radio stations), U-Matic, 8-mm film, Sony Beta, daguerreotypes, calotypes, Hillotypes, collodion, film, 8-mm video tape, Ampex 2-inch video tape (the first video tape), data tapes, punch tape, punch cards.

Still here but coming to an end, all but gone: mp3 players, DVD, CD, Blu-ray (players and burners are being phased out; Blu-ray carried more data but is the last wave of optical disks), VHS (Video Home System), streaming from individual websites (especially old formats like RealMedia), direct-to-DVD recorders for VHS-to-DVD dubbing and off-air recording, RGB/DVI cables and ports.

Moving in: streaming services which own the stream, take a cut and/or ownership, and are starting to own the means of production (the tools and programs to edit and render), some in "the cloud" and some on your "own" machine, while also becoming production studios in the new mold of the old movie companies and networks (Amazon, Netflix, Hulu, YouTube, Crackle, Sling); and HDMI cables (because they carry HDCP [digital content protection], unlike DVI cables, which have the same wires but no HDCP provision).

Also, live streaming services and apps are showing up everywhere: Facebook Live, Twitch, YouTube, Periscope, Vimeo Livestream, Ustream, Meerkat, OBS, Raptr, vMix, XSplit and others, plus hardware from Blackmagic and Elgato.

Part of archiving media requires extensive database cataloging. I have yet to meet a good commercial catalog program. They almost all have a single-machine point of view or a "cloud" service point of view. Some years ago I wrote my own cataloging program and use it to catalog all working and new archive disks. My cataloging database keeps a record of the files, paths, descriptions and the disks those files are on. I label all my disks with a name for the disk and a physical label on the end of the disk. The working disks cycle through as a working archive in shelf storage. Some clients' shows are also archived on (copied to) disks with files specifically for each client.
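That program isn't published anywhere, but the kind of record it keeps is simple enough to sketch (the field names and labels here are hypothetical illustrations, not my actual schema):

    // One record per file: what it is, where it lives, which labeled disk holds it.
    const catalog = [
      {
        disk: "ARCH_2019_014",  // matches the physical label on the disk
        path: "shows/2019/nutcracker/cam1_act2.mp4",
        description: "Nutcracker, Act 2, camera 1",
        copies: ["ARCH_2019_014", "CLIENT_AYB_03"], // redundant and client disks
      },
      // ... one record per archived file
    ];

    // Which disk do I pull off the shelf? A simple text search answers it.
    function find(text) {
      return catalog.filter(r =>
        r.description.toLowerCase().includes(text.toLowerCase()));
    }
    console.log(find("nutcracker"));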

Disclaimer

If I haven't made it clear by now, I should offer a personal disclaimer to let you know where I am coming from. I've always been an early adopter, often on the "bleeding edge." After enough "bleeding" I now tend to wait a little longer deciding what to adopt and whether I do. I'm also from a time when small and efficient code was a default requirement. I still want to see that today.

In most cases we do not need "smart technologies" but smart applications of old technologies. To get information across, any method needs to work with the brain's ability to absorb information. Any "smart" technology needs constant updating and replacement, at considerable expense in equipment, personnel and time.

I was a science-oriented kid. A chemistry kit, an A.C. Gilbert Erector Set, a microscope, a 3-inch reflector telescope, electronic projects, model airplanes, ships and crystal radios built from scratch, and shelves of books formed my world. I was in a tech position in the Air Force (geodetics), and I've been a programmer since writing my first program in Fortran IV in the fall of 1966 on an IBM System/360.

My Uncle Bud (high school education only) was the first in his family to bring mechanization to the farm, changing from mules to tractors and from wood stove and lanterns to electricity (quite a cooking adjustment for my Aunt Lill). His son and daughter went on to graduate from ag school and are on the farm. I still remember the old party-line crank phone on the wall of their kitchen.

My maternal grandfather was one of the first dentists in the country to use dental x-rays, and in 1947 he received the Pierre Fauchard award for work contributing to facial reconstruction for war injuries. His father, whose parents brought him over from Ireland in 1848 as an infant in the last large wave, went armed against death threats for his part in starting a school for blacks in Maysville, Kentucky, somewhere around 1900; I never knew the details. Both my mother and her mother were school teachers. My Uncle Frank Devine was a developer in the 1950's of what was called "programmed learning." It was part of his PhD in ed psych.

Function and Usability Determine Technology

Blending before it was called "Blended"

Some years ago I started a new course, with Nicole English, for the UMKC conservatory's dance division to fulfill their academic technology requirement. This course taught 3D animation to dancers by having them use a program called "Danceforms" to "notate" their dances on computer, producing a video of a piece they choreographed. It was a 3-hour face-to-face class on Friday afternoons in a computer lab. But the dancers, who were in a performance program, were often in rehearsals on Friday afternoons (their courses of study were all geared toward performance, meaning they were required to be in a lot of shows).

The level of tech usage ranged widely, but even the dancer who declared, "Computers just don't like me," did well. Her strength was her understanding of dance. As features of the program were introduced and demonstrated for her, she could apply those features to what she already knew. All the dancers could be seen going through various motions, such as hand movements, comparing those motions to the program's palette before applying them to key frames. In that way they could understand the program well enough to produce an animation of their own choreography.

So, to accommodate their various performance schedules, we set up a course which
1) supported all face-to-face resource needs and calendar needs,
2) could be used as an online reference, including by those installing the animation program on their own computers,
3) added lab times elsewhere in the week where they could get help from us,
4) gave them space to work with and assist each other (dancers are very inter-supportive), and
5) provided a library of various media files, video and music, for use with animations, for anyone needing them.

It wasn't until a few years later that I began hearing the term "blended" classroom. For me, what we did was just designing the class to reach the students where they were. Even today I don't think of it as a "blended" class, just a class which met needs where they were. To me, the "blended" category seems to limit thinking about course design at the same time that it gives us a new category to work with. In writing the course I just did what was needed to match the students' needs and availabilities and to get across the lesson projects.

Here is a URL to an archive of the "Dance Tech" animation course on my resume site.
I've modified the course to work without needing a server, substituting some JavaScript (my original code), so that I can use it on a USB or CD/DVD data disk as a resume handout.
http://www.mikestrongphoto.com/CV_Galleries/LessonExamples/Dance%20Tech%20Animation/Default.htm

Ed Tech, a personal history

Ed tech, as it is now called, started for me some 60+ years ago when my uncle, Frank Devine, was getting his ed-psych post-grad degree in the 1950's. He was one of the early developers of "programmed learning," and he was good at it. "Programmed learning" done well is effective, but it is far more difficult than it looks. Just the same, within a few years commercial publishers jumped on the "programmed learning" label, turning out their old material with standard chapters and questions, no different from their existing textbooks, but with the new label (new wineskins, old wine, so to speak). "Programmed learning" was a publisher's buzzword for several years. Before long the reputation of programmed learning was all but destroyed, and the buzzword itself faded into the background.

I've watched a parallel development with web-tech learning software. I was an early coder, starting in 1999, creating code for a new fully online degree program (BIT - Bachelor of Information Technology). I still have a learning framework developed in JavaScript and based on my uncle's frames. It still works, though I've had no occasion to deploy it in a long time. Others were developing learning software. There was a lot of hype and there were a lot of claims. There remain a lot of claims, usually not backed up by studies, rich with the same smell of snake oil I've seen in software as long as I've been programming.

I've also seen a lot of learning "platforms," such as BlackBoard, which are today's version of those commercial publishers in the 1960's who jumped on programmed learning as a sales buzzword. BlackBoard is really just another server system with nothing so special in terms of learning software, and its quiz system is nothing special either, equivalent to those 1960's publishers' end-of-chapter quizzes. So far these platforms are still selling like gangbusters, and they are not cheap.

Rolling Your Own vs Packaged-Ed Programs - i.e. local vs outside

My biases, up front, are local. I have a very strong bias toward locally-produced products and services. (1) Partly that is because I grew up in a small-business household, a glass service. (2) Partly because if we don't support each other face to face, then how do we expect our neighbors to support us? (3) And partly because, in my experience, locally produced goods and services are targeted better for local needs, simply because we generally know our own needs better than anyone from outside. The last point, in a seeming paradox, is that we are also part of a large community of knowledge exchange which comes with traveling and living in larger cities. The population of New York City, for example, is 37% foreign born; similar for other large world cities. You can add a fourth: (4) the same local knowledge allows quick and close maintenance, with a "hand-on-the-plow" feeling.

And that 37% foreign born clearly does not count all the people we know who come from small towns and go to NYC, either permanently or returning after a few years or even decades, which suggests that people born and bred in NYC are probably a minority. The place becomes a rich mixing pot where information and practices are exchanged as a matter of living and working together. Meanwhile, people have always had some sort of interchange with distant locations. In the last 30 years, with the internet and the World Wide Web (the linking system using the internet), we have all become a massive global city, rubbing elbows with each other more closely than ever. The HTML code, by itself, is instantly available to any coder who right-clicks on a browser window to get a popup menu with the "View Source Code" option. Indeed, often enough we can work with each other across vast physical distances, distances reduced to the reach of our arms to a keyboard and mouse.

Regarding the first two points: even my cameras, which can't be produced locally, can nonetheless be purchased either online or locally. I paid an extra $300.30 in Kansas sales tax for my Nikon D850 body, and waited weeks for the body to come in, because I wanted to support my local store, Overland Photo, the only real camera store left in the Kansas City metro area. When I travel between Kansas City, Missouri and Hays, Kansas, I try to see whether I can make Topeka in time to get to Wolfe's Camera at 7th and Kansas Ave. And when I really can't get something locally, either in the store or ordering through them, I will go online, usually to New York City for B&H or Adorama.

On the third point: in programming, I found that the programmer who works the job for which the program is written does a much better job of making the program act like a real extension of the worker's needs. The interface, depending on the skills of the programmer, might not always be as slick as the outsourced program's, but it is often a better fit for the local job needs. Remember that programmers are on the front lines of people working directly across the globe with each other, no matter how tiny or how massive their physical location. Still, there is always something to be said for true face-to-face work.

By going outside, an organization is often passing up highly skilled and capable resources close by. The irony is that the outside programmers are also local, to the place they live. And programming to a set of descriptions based on meetings, project papers, notes and questions is not really enough for fine tuning. There is too much distance, small as it seems, between the programmer and the user of the program. The two need to work together, and ideally the programmer should 1) also do the work of the user and 2) oversee and watch the user, to determine whether the program is really understood by the user and whether the programmer really understands the user. That's a one-on-one kind of task. It has to be done in person, at "the desk." Meeting in board rooms, even next door to "the desks," is a wee bit distant.

In 1983, as a meeting coordinator programming PCs for a management office, I needed to develop a mailing list of thousands using temporary workers. I found out very quickly that my interface needed adjustments as I watched the data-entry temps type in addresses and other information for our database. Sometimes the operator wouldn't see items on the screen, would misunderstand prompts, or would face fields which didn't fit the information and so invented, on the fly, ways of shoehorning the information into the fields as they were. In each case I had two choices: more training or a better program interface. In almost all cases I realized the responsibility, and the best answer, was for me to change the program to fit the needs and understandings of the operator. I wound up with a much more useful program that way.

Later, in the early 2000's, UMKC changed its system software for classes and enrollment from UMKC programmers to PeopleSoft. I remember meetings in 1999, before PeopleSoft was purchased, in which I argued for retaining the local programmers (not me; I was working on classroom software development, and, side note, didn't care for BlackBoard then or now). The change to PeopleSoft was made regardless; millions were spent, and we never, ever got back the simple utility of much of the home-grown software we had been using. Further, the talent migrated elsewhere.

A bit later I was hired to oversee new web sales software at American Crane and to build a web interface for the company leading to the new software. They had hired a large firm out of Dallas to make a fully new version of the sales software my boss had originally written. The company sold to wholesalers only, so instead of the typical shopping-cart interface it sent a row-and-column display, like a database's spreadsheet view, with hundreds or even a few thousand lines (records) at a time. It looked a bit homespun but it worked well. Even so, the company wanted to update the site, and the best conventional wisdom, as it often does, said to hire outside experts, because this was all the outside experts did. It certainly seemed like the right decision and is a common choice. It also cost a good deal.

Once done, the outsourced software worked nicely on local machines in office demos, but when it went live we had page-size problems. Many of our customers were overseas, in South Africa, Bangladesh and so forth. They had much slower lines, and the new software created HTML pages with full tags, a huge page load each time. Customers said they couldn't use our site to buy our parts because the pages kept timing out. So I did an interim fix by reworking the scripting software: instead of creating a fully-formed HTML page whose internal structure added a ton of file size, I sent the page information in a delimited, very compressed data form, then expanded it to full HTML on the client side using JavaScript.

The only catch was how much memory headroom each client had on their particular machine in their particular web browser. Nonetheless, this solved the problem of long load times and made those remote locations snap again. (I should note, they sold third-party Caterpillar parts to contractors across the globe. Some locations were very remote.)
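I no longer have that code to paste in, but the shape of the fix is easy to sketch (the delimiters and field layout here are hypothetical): ship bare delimited rows and let the browser build the heavy markup itself.

    // Server sends compact delimited rows instead of fully-tagged HTML:
    const payload = "PN1001|Oil filter|12.50\nPN1002|Head gasket|48.00";

    // Client-side: expand rows into table markup, so the tag overhead
    // never has to cross the slow line.
    function expandToTable(data) {
      const rows = data.split("\n").map(line =>
        "<tr>" + line.split("|").map(c => "<td>" + c + "</td>").join("") + "</tr>");
      return "<table>" + rows.join("") + "</table>";
    }

    document.body.innerHTML = expandToTable(payload);

The trade-off is exactly the one noted above: the client's browser does the expansion, so the limit becomes the client machine's memory rather than the line speed.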

The first thing I should note (this is important) is that no programmer was trying to do a shoddy job. There should be learning, not blame. The outside programmers worked hard and with dedication. The problems never came from lack of effort but from lack of personal "user" experience. They weren't using the software themselves. They were working to specifications from descriptions and questions, generally at some distance from actual production (meetings, phone calls, faxes, sometimes in our office and sometimes with us traveling to Dallas). You can only get so close to the goal that way. At some point you have to be almost in the same chair, at the same desk, at the same time.

In the spirit of number 4 above, I'm also a proponent of putting together web sites that can be maintained and updated locally, meaning the office or outfit the site represents should have the ability to make its own modifications. I don't want anyone dependent on me to change their site for them. A site doesn't have to be fancy or clever. No one comes to a website to be impressed by your cleverness. People come to a website for information. The clearer and simpler the information is to access, the better and more effective the site. The site is providing an information service, not a showplace for a web designer's theatrics. Nor should a website give a trendy and fuzzy impression of your organization. That doesn't mean it can't look good. Local talent is not to be underestimated.

One more example of local expertise, a personal story from the Air Force. It is one of those city-slicker-gets-out-slicked-by-the-hicks stories. One of the many teams I was on in the Air Force was for a job in Green Bank, West Virginia, at the National Radio Astronomy Observatory (Green Bank Observatory), probably in 1970 or 1971. I didn't observe this directly, but one of the other team members related it to me. One of our team, known for boasting, had gone into the nearby tiny town to shoot pool. He figured he could take them because he was from New York.

We'll call him Gary. The guy who told me the story was the one who spotted Gary coming in the front entrance of the observatory very early that morning, shorn of a few personal items. Apparently Gary had bet the locals the shirt off his back, and his socks. He returned wearing a jacket, pants and shoes. For the rest of our job there, Gary stayed at the observatory when the guys headed into town, and they were asked when they were going to bring the fish back. The locals, of course, knew every single inch of surface and every break point on their pool tables. Local expertise.

Rolling Your Own Ed Tech

This is not as scary as it sounds. Chances are you are already doing a version of this. The ed-tech software we see can have bells and whistles, but those sometimes change with each update. In essence, the promise of artificial intelligence interfaces which will allow teachers to replicate themselves as they place their courses online is pretty limited. That's me being kind. For the most part, teachers are placing their material on a web server using the content management system from the tech supplier when they could be doing the same using Dreamweaver on any web server. Ed-tech interfaces do allow teachers to construct quizzes, but there is no sign of the still-promised AI. Most of it is pretty basic. As I stated above, most of this reminds me of what happened to "programmed learning" in the 1960s. The delicate structuring in programmed learning was not implemented when publishers began producing their "programmed learning" books, which instead wound up being the same old chapters followed by a list of questions. It may have looked like an easy formula but it wasn't. Real programmed learning was costly, exacting and time consuming to produce.

Elsewhere on the page I've noted my own objections to using CMSs (content management systems) rather than typing pages in Dreamweaver. The original idea of CMSs was that web coding was too hard to expect large numbers of people to use, so CMSs are systems which can be edited on the web, directly, by anyone. The problem for me with this argument is a deliberate blind spot: Dreamweaver is an HTML editor, but it is a WYSIWYG HTML editor (what you see is what you get - a term not much used anymore as everything is WYSIWYG, pronounced WIZZ-ee-wig). The argument conveniently leaves off two major factors: 1) CMS systems have their own learning curves, at least as steep as Dreamweaver's or steeper, are subject to updates changing the rules, and carry a user-experience awkwardness in response times. 2) If you can operate a word processor, such as Word, then Dreamweaver should be a slam dunk. Dreamweaver is simpler than Word, with fewer options and convolutions, and it changes its workings less often than the usual CMS. You can use it almost like a word processor. The only real learning curve comes with understanding where your files are, source and web copies, and how and when to move your document from your own desktop to the web location.

Often, because I couldn't get what I wanted in Blackboard, including use of javascript to create my own in-page review quizzes, CSS stylesheets and my own menus, I would write my pages in Dreamweaver, drop them into Blackboard's resource files and then link to those resource files from the CMS front end of Blackboard. I will admit that, as an old coder, I chafe at Blackboard's controlled structure. Actually, I find it a bit insulting, because I don't see anything Blackboard pages have that I can't already do in Dreamweaver far more fluidly. Just changing fonts and creating attributes for heds and body text is like going backward to the very first browser code, before stylesheets, with a much more awkward interface. I remember the first time I really looked at the code created in Blackboard to control style items (fonts, size, bold, etc.) and realized how much extra and redundant code was generated compared to the compact and elegant way of adding styles with CSS.
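
For a sense of what that redundancy looks like, here is a toy before-and-after of my own (illustrative only, not Blackboard's actual output). Pre-stylesheet markup repeated the styling on every single element:

    <p><font face="Verdana" size="2" color="#333333"><b>Rehearsal notes</b></font></p>
    <p><font face="Verdana" size="2" color="#333333">Call is at 6 pm.</font></p>
    <p><font face="Verdana" size="2" color="#333333">Bring taps and character shoes.</font></p>

With CSS the style is stated once, in the stylesheet, and every paragraph inherits it:

    p { font-family: Verdana; font-size: 10pt; color: #333; }

    <p><b>Rehearsal notes</b></p>
    <p>Call is at 6 pm.</p>
    <p>Bring taps and character shoes.</p>

Multiply the first version by every paragraph on every page and you get both the bloat and the maintenance headache.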

Once you have teachers writing their pages in Dreamweaver instead of Blackboard, you also have pages in your own file system (your own control) rather than existing purely in some "cloud." You also now have the task of working the database which directs enrolled students to your pages, which lets teachers know who is in the class (enrollments) and which keeps track of grading and status in the school. That is a completely separate operation from writing content. Creating interactive learning nodes, such as quizzes or step-by-step tutorials, is a third area, whose mechanism is separate but whose application connects to each class exercise.
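
As a sketch of that third area, here is about the smallest possible in-page review quiz, the kind of javascript learning node I mean. The question, the element ids and the checking logic are my own illustration, not from any actual course page:

    <p>
      Q: What does the "ML" in HTML stand for?
      <input type="text" id="answer1">
      <button onclick="checkAnswer1()">Check</button>
      <span id="feedback1"></span>
    </p>
    <script>
    // Compare the typed answer against a keyword and show feedback in the page.
    function checkAnswer1() {
        var given = document.getElementById("answer1").value.toLowerCase();
        var feedback = document.getElementById("feedback1");
        if (given.indexOf("markup") >= 0) {
            feedback.innerHTML = "Right: markup language.";
        } else {
            feedback.innerHTML = "Not quite. Think of marking up a galley proof.";
        }
    }
    </script>

Everything lives in the page itself: no server round trip, no CMS, and it runs from any plain web server.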

"Getting Across"

"Getting Across" is a stage term which means you have an audience which is not just watching you, but engaged with you and supporting you. Kansas City's Ronald and Lonnie McFadden have ben performing since they were kids with their father. They tap, play horn and sing. In an interview by Billie Mahoney on her show, "Dance On" (a show I shot and edited - see my 1hr 14 min sampler here: http://www.mikestrongphoto.com/CV_Galleries/VideoEmbed_DanceOn.htm), Lonnie talks about a time when the pair were performing in Japan. They hadn't tapped in years. Their audience were appreciative but the brothers felt they still needed something else to connect. So, they decided to start tapping again.

When they brought their taps to the club and started tapping, "that," said Lonnie, is when they "got across" to the Japanese audiences. Suddenly the appreciative audiences were right there with them. They got across. It wasn't the tap tech, so to speak, but their performance in tap. Here are two URLs to National Tap Dance Day performances shot and edited by me. In both cases Billie produced the shows:
1) at The GEM 2001: https://www.youtube.com/watch?v=wmyOgBGzQLQ
2) at the Uptown Arts Bar (now closed) 2014: https://www.mikestrongphoto.com/CV_Galleries/VideoEmbed_NationalTapDance2014.htm This one uses three stationary cameras I set up so I could take stills. That's me in the red shirt and blue jeans, seated and moving, DSLR to my eye. Each still picture is flashed on the video for several seconds, at the point I shot it.

Back to the idea of getting across content effectively. Once upon a time, such as when I started in 1967, it was thought that writers couldn't shoot and shooters couldn't write. Beliefs about human ability went along with a belief in fixed categories. This was the same era in which a performer who could act, sing and dance was considered a triple threat. It was also when I was trained in both photography and writing. Good thing, too. Too often when I was looking for a photographer job, the position was already occupied or had just been hired for (as at The Geneva Times), or the outlet was a radio station that didn't need a photographer (different today, when radio stations have websites) but had a writing job as a reporter. So I did both, at a time when combining job tasks was unusual. Today most reporters and even photographers are expected to do both jobs. Call it a multi-job.

My multi-job equipment consisted of a manual typewriter (a Royal, later an IBM Selectric with an OCR ball after the paper bought a $34,000 scanner, which would cost less than $100 today, to replace a human typesetter), a package of newsprint cut down to 8.5x11-inch paper, a large pair of shears, a large can of rubber cement to paste the newsprint pages into a long roll carrying the full story, my own cameras (35mm, a medium-format Mamiya C3 twin-lens reflex and a 4x5 Crown Graphic), my own darkroom equipment, developers, enlargers and a police scanner.

We didn't have the internet then, so stories didn't travel by email or upload. The Geneva Times hired a local from the area I covered (the southern part of Seneca County, New York) who worked in Geneva, New York, as a courier. He came by my place each morning about 5 a.m., picked up an envelope with my stories and any negatives and/or prints I had processed, and left it at the newspaper before continuing on to work. News outside our area came to us constantly via teletype.

It was on The Geneva Times (now the Fingerlakes Times) that I learned:
1) Memory is re-created and modified on retrieval (not retrieved like computer data). People whose initial memories of a meeting we had both attended differed from mine would read my story in The Geneva Times the next day and agree with my account. That gave me a reputation in the area for accuracy and fairness. A local resident who was another reporter informed me of this reputation, which was good, because I wasn't too sure how I was viewed, being a reporter for the local daily newspaper.
2) Deciding to make that reputation even better, I rolled out the tape recorder I had used at WGVA. I thought the audio recordings would make things easier. I found out that handwriting notes was much better than a recorder because
[a] handwriting engaged my brain in summarizing what was just spoken, giving me a head start on typing out the story, and
[b] by the time I was done I had several different meetings: the one I was in, the one I remembered going home, the one I heard on the tape, the one I read from the notes, the one I typed up and the one that was printed in the paper, with which the readers agreed.
[c] Now I can add that my present memory of those stories is yet another version. Even with a recording, I learned, there are few fixed "facts."

When I worked in radio we would pull the stories off "the wire" at regular intervals. Usually we took time to read the stories first, but once in a while it was "rip and read," which meant a cold read, risking stumbles on unexpected words. The Geneva Times printed and distributed its daily (Monday through Saturday) edition via trucks. The pages were paper pages with text and pictures. And often the local radio stations would read our stories, including mine, on the air. Such was the multitude of media we had. Opening the wide pages of paper was its own physical experience, which merged into a mental experience of reading content. Today's media is also an experience concentrated in devices, if not in content, from which the device continually distracts us. There is something about paper which allows immersion in the material contents, something devices do not allow.

That was the mid-1970s, and the still-new word "multimedia" meant a set of slide projectors, maybe a 16mm movie projector and a few lights. Mostly the workings for a stage show. The first internet services, portals such as Compuserve, AOL and GENIE in the 1980s, were the real start of mixing media to deliver story and other narrative information. The world wide web, in the 1990s (started in 1989), really deployed multiple media in the delivery system. Essentially the web page was now the show, and the meaning of "multimedia" shifted to the main purpose of web pages, providing information, rather than a stage or event show.

Narrative

Start with the idea of a story, call it journalism or entertainment or teaching or just living. The story / narrative comes first if you want to retain attention for any length of time. What do you need to get across to someone? What do you need to accomplish that task? So we have text as the most basic element, perhaps supported by images and maybe video or animations and published on electronic media. HTML pages are really the same thing as any previous pages on paper, such as magazines, books and newspapers. The "ML" in HTML stands for "markup language."
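
As a minimal illustration (the headline and names here are invented), the electronic markup does the same job the pencil once did:

    <h1>Council votes to pave Main Street</h1>       <!-- the hed -->
    <p>The village board voted 5-2 Tuesday ...</p>   <!-- a paragraph mark -->
    <p>Trustee <b>Jo Jones</b> dissented ...</p>     <!-- bold-face designation -->

Each tag is an instruction to the "typesetter," now the browser, rather than part of the story text itself.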

Anyone who worked before SGML or HTML remembers using pen or pencil to mark up a typed page or a galley proof with spelling corrections, font designations and paragraph breaks, adhering to the publication's "style book" as well as the widely used Associated Press (AP) stylebook. If you've ever worked on a publication with letterpress printing and type set on a Linotype (with its wonderful click-clanking sounds and molten-lead container on the side), you remember lines of type, set to a line height, with thin sheets of lead used to add space between those lines of cast-lead type. It was called "leading," and you can still set "leading" electronically to increase line spacing, but with a very different method.
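
In a stylesheet today, for example, those strips of lead reduce to a single declaration (a minimal illustration):

    /* "Leading" survives in CSS as line-height: extra space between lines of type. */
    p {
        font-size: 12pt;
        line-height: 1.5;   /* roughly 18pt from baseline to baseline */
    }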

Whoever controls the narrative controls the world

Do not imagine for a second that social media on a large scale exists for the benefit of the people. Or that you are not being surveilled. Censoring social media, regardless of how repulsive the censored source is, means narrative control. Forget trenchcoats and shady phone tappers sneaking into building basements. Now people jump at the chance to acquire personal surveillance. How many NPR stations tell their listeners to "tell your smart speaker to tune to" their station?

Facts, Sacred Facts and Methods

  • Validate your sources; know how well you know them.
  • BS detector - a.k.a. experience and disillusionment.
  • Does the claim make sense?
  • Is there evidence presented, or is it just "something" presented?
  • Does this have a familiar ring?
  • Is this instantly jumped on and honored by mainstream journalists? If so, back off and look again.
  • If this is a "canonical" narrative, look at it again, freshly. Solidly established narrative claims are often fables.
  • Look again at how we accept most government pronouncements. No checking. That is a problem for journalism and for any idea of democracy.
  • ... and further skeptical questions ...

There are a couple of basic principles common to all questions raised by journalism, which are needed to get around the usual gaslighting:

  • Follow the Money - a search for motive as based in money. Usually this will work to direct your research. This is also where most journalism courses leave it.
  • Cui bono? - Latin for "Who benefits?" - almost the same as above but not necessarily about money; usually about power, power relationships or ideas of who should rule and who should serve.
  • What is the oppression and what are the divisions? - This usually comes up for people in revolt, reacting to mistreatment and victimization, often built up over a long time.
  • And sometimes there is no rationale at all, just plain mean, hateful spite, vindictiveness, rage and (add a few yourself here).
  • Finally, understanding power vectors - George Carlin's superb analysis of power: "It's a big club and you ain't in it. You and I are not in the big club." (i.e., who is in and who is not in "the club")

A Rant and a "Leak" - personal example

"Leaks" are not well understood. They are often official dish, or trial balloons, or just a back-alley way of getting at rivals - hence all the "off the record," "backgrounder" and "a high official said" attributions. And the general public seems to think leakers committed an official betrayal of trust or a crime. In the mainstream narrative the leakers are people such as "Deep Throat," Daniel Ellsberg, Chelsea Manning, Katherine Gun, Julian Assange (see below), Jeffrey Sterling and so forth. Those are real leakers, all right, and true heroes, people with ideals. But they are only the most public, and a small percentage. More often, officials want to feed you their information so that you publish it as your story. Often they double dip by getting interviewed about the very material they leaked.

In the mid-1970s, as an area reporter for The Geneva Times (now the Fingerlakes Times), I was covering a county assembly meeting for a vacationing reporter when the Republican county attorney got on the podium to point me out to the assemblage and excoriate me and my reporting. It was a surprise but didn't bother me. I figured John saw a chance to blast away at me in front of his buddies and took it.

The next Tuesday I got a phone call from his secretary asking me to come by; they had something for me. I assumed it was some sort of complaint. Instead, John's secretary handed me a large, sealed manila envelope. It was filled with documents aimed at discrediting a political rival, a Democratic judge who had been elected a year or a year and a half prior, in 1974, in a campaign in which Republicans smeared him. It was a pretty dirty campaign for those times, mild compared to 2020. Clearly they weren't giving up.

In openly tearing into me at the county assembly meeting, basically a bit of theater, John gave himself cover to avoid being suspected of being the source for the information he handed me a couple of days later. I headed for my editor and we perused the material. It was Chris’s beat and I was just filling in, so we handed it to Chris for his story material, when he returned from vacation. I don't remember what happened to it after.

So that was how I learned the real nature of many "leaks" and "authoritative sources." Add terms such as "off the record," "backgrounder," "access," "from a high official" and so forth. Cozy-sounding terms which really mean the reporter is getting government or office-holder propaganda (agenda) passed on as verified, substantial information when it is really just a handout and should be checked with real shoe leather. It is a back-alley route to controlling the narrative by "infowashing" it through reporters who pass it on as a "scoop" (a term we used to use but I haven't seen in years).

Reasons for classifying information (hint: most are excuses to avoid oversight)

Lima Site 85 in Laos, a "Combat Skyspot" operation (LS-85): started November 1967, massacred March 11, 1968, declassified in 1998.
Links: The CIA link is longer and more anodyne, but has added detail about previous operations at the location.
1 - https://en.wikipedia.org/wiki/Battle_of_Lima_Site_85
2 - https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/95unclass/Linder.html
3 - https://en.wikipedia.org/wiki/Lima_Site_85

The very name "Skyspot" was secret when I was in the Air Force. Skyspot directed munitions and bombs over North Vietnam. LS-85 was in a far northeast corner of Laos, poking directly into North Vietnam. It lasted from November 1967 through March 10/11, 1968, when it was overrun by the North Vietnamese. The location was chosen for installation of "command guidance" radar for bombing North Vietnam, because it was already a CIA base with Hmoung support troops and because it was very close to North Vietnam, too close. It was also a darn stupid idea. A little like putting the guns of Navarone on an island just outside London and expecting a "secret" classification to keep the Brits from seeing the guns. In this case the guidance radar was located this close to the north to increase the accuracy of the bombings.

But it was also darn obvious. A lot of people were killed. Had this not been secret, it is possible (though hardly certain) that someone with common sense would have called a halt. CMSgt Richard Etchberger was killed during the evacuation by an AK-47 round fired through the bottom of the chopper, into his back. He died in the air, on the way back to base. His family was told he was killed in a helicopter accident and didn't learn the true nature of his death for 13 years. It would be later yet, in 2010, that Dick Etchberger was posthumously awarded the Medal of Honor. I'm sure his family would have preferred him alive, without the medal.

I learned about it earlier than that declassification because I wound up in the same squadron as some of the people on that job. I didn't enlist until 7 November 1968, a year after the site went in. I was trained as a geodetic computer, one of the occupations used for Skyspot ballistic calculations for bomb drops, and later cross-trained as a geodetic surveyor, working with some of the surveyors who had initially set up LS-85 in November 1967 and who returned to our squadron before the site started to be hammered by the North Vietnamese.

This was also secret because a lot of rules were being broken, including a prohibition on troops in Laos, so our guys went to the site in civvies with fake ID claiming they were working for a civilian company. The CIA had been running this operation in violation of Laotian diplomatic restrictions for several years. So a lot of people were there who were never there. Because it was classified, it was wide open to abuse and to bad planning without oversight. Classification as secret is usually justified as protecting lives. By avoiding oversight and criticism, that secrecy cost lives.

Something that didn't cost lives, but exposed crimes that took lives, was WikiLeaks. See below.

The Exodus - using comparative literary analysis to assess an author's accuracy

As long as we, as journalists, need to question fixed narratives, we may as well check out a big one, a sacred one, to see how almost every attempt to verify "The Exodus" shows the same lack of rigor found in most journalism. To look at this story we need to understand the nature of analyzing a narrative. The Exodus has no provable historicity. It is, however, a major founding myth, and a great deal of history and current political and military policy originates with the claims of this story. The story itself is not totally unique; there are similar stories from other groups. Often, stories were "shared" across cultures.

Two main points come into play and then a lot of other details are worth going after:
1 - When the writer is fuzzy on details, or wrong, it means the writer doesn't know the details he or she is writing about.
2 - When the writer is clear on details it means the writer knows the details, usually as contemporaneous information.

Google away on this topic. You will find uncountable numbers of web pages with "facts" validating the Exodus. Again and again the method is to look for almost any possible way the story might have worked. They start by believing the story as given (think of claims at a press conference or press handouts) and then construct their own details to justify the claims - like asking the politicians about the story rather than checking it out independently.


[Map: boundaries of ancient Egypt's "New Kingdom" period included Sinai and Canaan, all the way up the eastern coast - the entire "escape" route.]

Fuzzy stuff and wrong stuff

  • The name of the Pharaoh is never given, so various writers and movie scriptwriters have made up their own, usually naming Rameses, probably because the city of Pi-Rameses is mentioned. But we are given the name of the Babylonian ruler, someone known at the time of writing.
  • We are never told the time period, but the assumption of "Rameses" puts us in the "New Kingdom" period, from around 1550 BCE to 1077 BCE.
  • The New Kingdom: https://en.wikipedia.org/wiki/New_Kingdom_of_Egypt - note the map
  • We are told the Israelites escaped Egypt into the Sinai, hung around there for 40 years and headed for Canaan, as if these two locations were outside Egypt or its sphere of control.
    • They could not have left Egypt this way, because the boundaries of Egypt in the New Kingdom extended across the Sinai and totally covered Canaan.
      • This would be like writing that you escaped the United States by going from New Orleans to Nashville, hanging around the clubs for a while, then moving on to Cleveland where god told you to conquer (a.k.a., devour) Cleveland, killing every last person young, old, male, female and every animal. Genocide. Hardly the first biblical genocide, commanded by god and done by the "good guys."
      • Side note, the gospel song where Jericho falls is a great tune. It is also a tune celebrating the total genocide of a city, which is another detail not supported by archeology.
    • The author of the story seems not to have known those boundaries and to have assumed the borders of the 6th century BCE.
    • Here we are reading details that are contemporaneous to the author, while the details of the period the story is actually set in, far removed from the author's knowledge, stay unspecific.
    • The author(s) of the story were actually writing centuries after this time period, starting with accounts in the 9th and 10th centuries BCE, probably expanded and codified in the 6th century BCE under the Babylonian captivity and return.
    • The story has the Israelites as slave laborers (popular retellings add the pyramids), but we now know the builders were skilled artisans, paid well and treated well.
  • Even if they did leave as a group, there was no need to go through a sea.
    • Why even head for the Red Sea when there were normal trade routes, used for centuries, going in both directions between the Nile delta and the Sinai?
    • This would be the same route Moses takes going from Canaan into Egypt (in the story), remembering that at the supposed time this is all Egypt.
  • And this is, after all, a story told by storytellers. Entertainment and state propaganda, in the same package.
    • You can guess this from the Zipporah-at-the-inn story (just a couple of lines), in which, after God sends Moses on a mission to rescue and lead God's people, God suddenly does a volte-face and decides to kill Moses because of a little problem with circumcision. Really? The dude he sends in for the rescue? Did God get surprised because Moses forgot to mention something? How can God get surprised? And why so petty? I mean, mutilating your penis is a requirement? Can't he just wear a mask? Get real!
    • In any case, Zipporah (Moses' Canaanite wife) saves the day by cutting off the foreskin of their son (why him, not Moses?), and Moses isn't killed by God (whew! barely saved that mission).
    • But what is that about? What is it doing there?
    • The story line, if you can really call it that, in the original, is a mess. It is all over the place, repeating some steps and pulling in plot devices seemingly at random. The story we usually read is a cleaned-up script version, a bible-stories version, a movie version. Go back and read it in the original and forget the movie version. Pretend you've never heard the story. It is not coherent. In truth, many, many such stories are far from coherent story lines. They are more like thrown-together fragments and remnants.
    • Instead of thinking about the story, think about storytelling and itinerant storytellers. Can you imagine the storyteller deciding he needs to wake up his audience, suddenly pulling a reversal on them? It is the same thing as a telenovela or any TV show that realizes it needs another red herring, a few more minutes to pad out the show or just something to juice the audience. Writers have been juicing their stories with dramatic turnabouts for millennia. They did it then and they are still doing it.
    • Besides, similar devices are used in other stories of the period. Just read Herodotus - also choppy stories. You really have to pay attention.
  • We are never told why the Canaanites are wiped out to the last person and animal by the Israelites, supposedly told to do so by God (gee, such a nice god, really?). What did they do? We don't know.
  • Basically, the story is structured to justify wiping out a people because you want their land. The narrative is an "Our people" versus any "vague, minimized, devalued people." That old 6th century BCE story would be just an old story, except it is still being used, 26 centuries after its deployment, to justify current takings.

There are a ton of other reasons this story doesn't smell right, in terms of common sense, war crimes, God's seemingly limited ability to figure out what Moses will do before giving him commands, and so forth.

However, a clue is in the use of this story to justify taking over a land and wiping out a people.
If you know what a story is used for, that gives you a clue about its provenance and about whether it was constructed to support an agenda.

Hint: the Exodus and the later conquering of Jericho are a bit like the AUMF of 2001. Quoting Wikipedia: "The Authorization for Use of Military Force (AUMF) (Pub. L. 107–40, 115 Stat. 224) is a joint resolution of the United States Congress which became law on September 18, 2001, authorizing the use of the United States Armed Forces against those responsible for the September 11 attacks." Only the biblical version of the AUMF has been used for at least 1,600 years - sort of the original model for never-ending permission to wipe out another group of people so you can take over. The Exodus is not just an old story; it is a foundation and justification for today's Israel and its genocidal behavior toward Palestinians.

The Accident, Assumptions and Eye Witness Reliability

On Seattle's I-5, demonstrators Diaz Love and Summer Taylor are hit by a car driven by Dawit Kelete

https://vimeo.com/436466767

July 4th, 2020 - a tragic accident. A white Jaguar owned and operated by Dawit Kelete ran over two demonstrators in the wee hours on I-5 southbound in Seattle. These are two clips downloaded from the web and placed together in an editor to get a detailed view. Not shown in the first clip: the driver came to a stop just after that video ends. There seems to be no particular reason the video ends there, and it is tempting to suspect the submitter of stopping at that point in order not to show the stop. After Kelete stops, one of the demonstrators starts banging on his car and trying to open the door. Kelete takes off, I assume afraid of the demonstrators trying to get into his car.

The two demonstrators are white trans women and the driver is black. That needs to be noted because of the origins of the protest and the identity politics feeding into this, including remarks from Diaz Love which would seem to indicate that she ("they" is her preferred pronoun, but the plural is immediately confusing here) included being trans as part of the "attack" on both Love and Taylor.

Because this was a Jaguar, a luxury car, and because Kelete is the owner, I had an immediate common-sense question. It is very unlikely that any luxury car owner would deliberately put so much as a scratch on their car. So that was an early starting point for me to doubt a deliberate attack. I decided to look more closely at this rather than accept the widely and all-too-quickly published claims that this was an attack. Most accounts, whether tweets or established news organizations, were clearly eager to fit this genuinely horrific accident into their own narrative of an attack on demonstrators. I don't remember seeing any speculation that this was a genuine accident, or any reasoning similar to mine which might question the quickly established narrative.

I downloaded several video clips, using two of them. I labeled frames in the video to specify braking, turning, etcetera. As I went frame by frame through the video I could tell when the brakes were put on, then off and on and off again (from both the lights and the red glare behind the car). So he had to have been pumping the brake. It remains unclear how and where he got onto I-5. What does seem clear to me from this examination is that this was a tragic accident and not an attack.

It is clear to me that he must not have expected to encounter the cars across the highway; that he tried to brake, then swerved to miss or get around the cars, then braked and swerved again to avoid the larger group of people, which sent him toward Love and Taylor; and that he did stop, rather than try to run (hit and run), but was likely scared of the people approaching and trying to get at him in the car.

Index of times on the video:
NOTE: you might want to skip ahead on the video to 6 minutes, or even 7 minutes. Not much happens before then. I included the earlier material (which was much longer than shown here) because it shows the attitude of the demonstrators before the incident. They seem to have been oblivious to their possible danger. It would be far too much of a stretch, not to mention an ugly accusation, to think their unpreparedness was their own fault. They had a lot of reason, as well as recent experience, to believe that the police were making sure no one got on the road. But something was open. Not explained at the time of this writing is how Kelete wound up on that stretch of highway. He, too, I guessed from the video, seemed to think he was alone on the highway with no one else to worry about.
Overall length: 9:41,11
(format: min:sec,frames)

1:20,02 - standing around (long section)
5:54,22 - standing around (just before)
7:13,02 - first alert of car approaching
7:21,02 - car swerves around parked vehicles
7:27,11 - car comes to a stop
7:37,08 - people start pounding on car, which takes off
Show again, slower:
7:53,01 - collision portion in time-stretched video
8:54,19 - Stills, Google-map illustrations

One of the witnesses stated that from the other lane he heard the car rev up before hitting Love and Taylor. The frame-by-frame video indicates otherwise. We should note that eyewitness testimony is not that reliable, and "we've" known that for many decades. The other item to note is that pumping the brake and trying to steer hard could also produce changing engine sounds, which might have left the impression of revving up, especially if the witness was primed to see this as an attack.

About Eyewitness Testimony

As an aside, I should note that I have two videos of a bank robbery "committed" as a class project, with police escorts monitoring, to demonstrate eyewitness reliability. Both videos are from Dr. Will Adams' constitutional law class at William Jewell in Liberty, Missouri. One is from 1967 and the other from 1971 or 1972 (Will couldn't remember which when he had me do a re-edit with title comments). Even the real police, who escorted the projects to make sure nothing went off the rails and who were called as witnesses in the in-class "trial" conducted by a real judge, made some glaring errors, as shown by the movies taken of the exercise. These officers had no stake in convicting anyone. It was all a class exercise, but they still made errors which caused an innocent participant to be charged in the exercise. Multiple people made the same eyewitness mistakes. They were all certain until the film shown for each class exercise proved them wrong.

In the Innocence Project, 69% of those exonerated by DNA (252 of 367, or 0.687) were convicted largely on eyewitness testimony.
https://www.innocenceproject.org/how-eyewitness-misidentification-can-send-innocent-people-to-prison

Intelligence Sources Say ...

Check the source. Almost certainly this is state propaganda designed to attack another country and to keep the military and military suppliers in big bucks. Very, very seldom is this not tinged with self-interest on the part of the agency the source belongs to. The CIA has long (from its start) had a pump-and-pull operation with journalism, both cultivating journalists who like to think they are special recipients of inside information and creating and running its own journalists.

The CIA has been actively, and covertly, overthrowing governments across the globe. Why, for example, is the United States currently crippling Venezuela, smearing Maduro and pushing Guaido? Ask yourself why Venezuela's regime is our business to determine. Here is a more complete accounting of what the US did to Guatemala in the 1950s, largely for the sake of the United Fruit Company (now Chiquita) and a lot of bananas (really): https://narratively.com/the-literally-unbelievable-story-of-the-original-fake-news-network-full

According to public relations guru Edward Bernays, who wrote the original book "Propaganda," one of the main purposes of propaganda is not to make people completely believe fabricated information but rather to create a climate in which people can doubt their established beliefs and entertain new claims. Smears are more effective and longer lasting than accurate facts. They can destabilize true facts, or at least make you less certain in defense of what you should know to be facts.

For example, smears of Julian Assange (WikiLeaks' creator) as arrogant, dirty, a rapist, or as having fled Sweden are all complete lies. Yet you will repeatedly hear the most trusted mainstream media refer to Assange as having been charged with, or wanted on, sexual assault charges. That was never the case. Or that a condom was presented as evidence. It was presented, but it did not have Julian Assange's DNA. For that matter, it had no DNA at all. As far as charges go, he was never charged with or wanted for anything (the first prosecutor who looked at the case decided there was nothing to prosecute) until a second prosecutor, with ties to American intelligence, stepped in and made claims for extradition without actual charges, while refusing the standing offer to interview Assange in Britain, something she and her colleagues had done for others many times. Nor did he flee. From the first he voluntarily went to the police in Sweden and cooperated fully, was told he was free to go, and did so. The reason certain Swedish prosecutors wanted Assange in Sweden was suspect as a way to hand him over to the US. This was clear when they repeatedly refused to certify that if Assange went to Sweden he would not be snatched to the US.

The enormous efforts on the part of the US used a bogus charge to punish a journalist (Assange) and to send a message to other journalists, anywhere in the world, regardless of which country they come from: exposing crimes of the US, including war crimes, will be the end of your life, one way or another. Time and again the secretiveness and arrogance of the United States has been clear, and yet mainstream media bow down to it repeatedly (forget the hero stories about standing up for what is right). Again, do not look at charges and narrative claims; look at who is in charge.

The connections and their histories also need to be noted, as illustrated in this lift from Wikipedia's page on United Fruit:

"John Foster Dulles, who represented United Fruit while he was a law partner at Sullivan & Cromwell, who negotiated that crucial United Fruit deal with Guatemalan officials in the 1930s, was Secretary of State under Eisenhower; his brother Allen, who did legal work for the company and sat on its board of directors, was head of the CIA under Eisenhower; Henry Cabot Lodge, who was America's ambassador to the UN, was a large owner of United Fruit stock; Ed Whitman, the United Fruit PR man, was married to Ann Whitman, Dwight Eisenhower's personal secretary. You could not see these connections until you could – and then you could not stop seeing them."

Reporting on the wars (direct and proxy), massacres and assassinations connected with something like this only touches the surface and usually serves as a distraction from the few people in power (cui bono) who are playing "the game." The manipulators live and prosper while pawns die.

Spitting On the Troops (hint, not on me) and Constructed Memories

Between fake history, military agendas, constructed memories and an inability to produce contemporaneous accounts of people spitting on troops returning from Vietnam, I offer you four items: 1) the Salem witch trials, 2) McMartin Preschool, 3) QAnon and 4) my own memory of exactly the opposite treatment over three and a half years in the field, in near-constant contact with a public who had more than enough chances to spit on any of us, out in military uniforms or trucks for weeks to months at a time.

I was a geodetic surveyor and we were sent all over the world and all over the US. We were back-patted, lauded, offered dates, drinks and meals. Never once did I experience so much as a cross look. And if any military members were open, vulnerable and just plain available to such attacks, it was us. And not just me: I never heard of anyone in the squadron being in such a situation.

The closest anyone came to problems was one of our teams working in upstate New York, near Utica. A civilian asked the team what they were surveying for. Brucey, being funny and smart-alecky, told the guy we were putting in a bombing range through his town (we weren't, of course). That got back to a congressman and eventually to our colonel. Brucey was in a bit of trouble (nothing serious - he went on to be a "lifer") and we were in stitches (any trouble with a colonel could be enormously entertaining to us). Despite the guy going to his congressman, no demonstrators came out to oppose a bombing range.

  • There are some other elements which are very suspicious.
    • Try very, very hard to find any contemporaneous accounts (stories done at the time) of demonstrators spitting on returning troops. Look very hard for pictures. Google away. You won't find them. There are a couple of pictures, faked with Photoshop, of crowds with signs attacking troops. Every time you see such a picture, look closer.
    • We don't hear the spitting narrative from anyone until after the first Rambo movie.
    • It gets used to justify more war and sending more troops into harm's way.
    • Any attempt to call this one out is met with enormous directed protest and claims of experiencing such treatment.
    • There are a scattered number of people, now, who make the spat-upon claim. I'll be brutal: I don't believe them, though I can't prove otherwise. I can only show reasons to suspect the spitting claims.
    • I've seen too many years of "recovered memories" and fabricated claims sworn to by supposed victims and eyewitnesses who fully believed their fabrications.
    • There is solid research showing how easy it is to plant fake memories by suggestion.
    • There is also a large investment in the imagined narrative, so challenging that deeply believed fabrication can bring up some very hostile reactions.
    • Intensity does not mean truth.
  • More than anything, this victim narrative is not used to bring help or recognition to former troops or to lessen PTSD. On the contrary, it is a victim narrative using suffering and death to shut down any challenge to further wars, and with those further wars, more suffering, death and PTSD. The spitting narrative furthers the cause of war-making by equating criticism and sharp questions about military ventures with hating soldiers.

 

Constructed and "Recovered Memories"

Just say McMartin Preschool (Google it) to get to accounts of modern-day (1980s and '90s) witch trials, not unlike the old Salem witch trials. Lives were upended and people went to prison based on hysterical and fabricated eyewitness accounts - sincere misbeliefs. And not just McMartin; other preschools were caught up in this. This was the time of so-called "recovered memories," in which people supposedly revealed old, buried, repressed memories of horrors. It was all a fraud.

Wikipedia entry (it says no convictions, but several persons were sent to prison in related types of cases, not all at McMartin, and some only recently got out): https://en.wikipedia.org/wiki/McMartin_preschool_trial
Where are they (2019) https://www.oxygen.com/uncovered-the-mcmartin-family-trials/crime-time/mcmartin-preschool-trial-where-are-they-now

People created in their minds events that never happened. Entire fabrications. They believed their fabrications. Their sworn certainties convinced large swaths of people, including news media across the country who reported on this as credible. Decades before hashtags.

 

Palestinians

My introduction to the mere existence of Palestinians was the Munich Olympics of 1972 and the Israeli hostages. The Palestinians had no context for me; they were mindless savages out of nowhere. Or so I thought. It would be years before I even realized who these people were and how badly they had been (and are) persecuted by the Israelis, their lands and their liberties stolen from them.

 

Iranians and the Embassy Takeover and Hostages

In 1953 the CIA, run in the interests of the Dulles brothers, fomented a coup against Prime Minister Mohammad Mosaddegh in Iran and installed Mohammad Reza Pahlavi as shah. Mosaddegh was the properly, fairly and democratically elected leader of Iran. He decided that Iran's oil should belong to Iran and not to various western corporations. That was too much for the US, and especially for the Dulles brothers, who carried out the coup, installing Pahlavi. The shah instituted a police state with imprisonments, torture and executions. In the west, Pahlavi received great public relations from complicit news media. He was handsome, with a beautiful wife and great outfits. And that is about all we saw. Reza Pahlavi was our man in Tehran, modernizing an ancient land.

Twenty-six years later the Iranians revolted and threw out Pahlavi. By that time Pahlavi was ill and needed advanced medical care. The US offered him asylum and medical care, despite warnings not to give protection to Pahlavi, who was wanted for crimes. When Pahlavi entered the US for treatment (which he didn't get - he finally wound up in Panama, almost the same thing, because the US controlled Panama), the US embassy in Tehran was stormed and overrun, and the people in the embassy were taken hostage.

When this hit the news, there was no context, not even the very minimal history in the last two paragraphs. The Iranians were portrayed as uncivilized savages who did this for no reason at all. ABC's Nightline started as nightly coverage of the embassy hostage crisis, with Ted Koppel as host. Not once, not even years later, did Koppel explain how the CIA's overthrow of Mosaddegh and the Shah's police state led to this. Not once. For years I was ignorant of any real reason for such an uncivilized action by this "rogue" state. There was no explanation of the US role in changing the regime in Iran.

 

Julian Assange, Wikileaks and Whistleblowers

Those who expose corruption and criminality, including murder and war crimes, from within the body committing the offenses wind up in conflicting-loyalty situations because (1) they believed in the higher principles espoused by those bodies and (2) they are treated with the contempt we often have for "squealers," "stool pigeons" and other resented figures. They are often shunned by their former colleagues, even colleagues who saw the same crimes but said nothing.

One of the most disgusting acts of cowardice and sucking up from the "press" is the failure to get behind and support Julian Assange. It is venal, cowardly and terribly jealous. Thanks to the CIA's smears of Julian Assange, the press is able to justify this by going along with the lies describing him in disgusting ways.

Here is a statement from Julian Assange's partner, Stella Morris.

  Update on Join my fight to free Julian Assange and stop US extradition (Stella Morris, 1 October 2020)
  • Julian and I would like to thank everyone for the kindness that has been shown over the past few weeks. Every message, every action, every show of support means so much to us and we would like to thank you all for helping us continue this fight.
  • It’s a fight for Julian’s life, a fight for press freedom and a fight for the truth.
  • Over the past four weeks the true nature of this prosecution has come to light. Julian is being punished for performing a public service that we have all benefitted from. He is in prison because he informed you of actual crimes and atrocities being committed by a foreign power. That foreign power has ripped away his freedom and torn our family apart. That power wants to put him in incommunicado detention in the deepest darkest hole of its prison system for the rest of his life.
  • Julian faces a 175-year prison sentence. Most of the charges relate to simply receiving and possessing government documents. Under oath, the prosecution concedes that it has no evidence that a single person has ever come to any physical harm because of these publications.
  • Let me repeat that: there is no evidence that a single person has ever come to any physical harm because of these publications.
  • Julian is not a US citizen. He has never lived there. He did not sign an oath to the US government. He should not be sent there.
  • Julian’s duty is to the public: to publish evidence of wrongdoing, and that’s what he did.
  • The US administration is trying to make normal journalistic activities, which are entirely legal in this country, an extraditable offence. If he is sent to the US, Julian will not be able to argue the public interest of his publications because there is no public interest defence. And because he is not American, the US says he does not have free speech protections.
  • The US administration won’t stop with him. The US says that it can put any journalist, anywhere in the world, on trial in the US if it doesn’t like what they are publishing.
  • The US administration is exploiting the lopsidedness of the UK-US Extradition Treaty to deny justice to the family Harry Dunne, and to force cruelty and injustice on ours.
  • This case is already chilling press freedom. It is a frontal assault on journalism, on the public’s right to know and our ability to hold governments, domestic and foreign, to account.
  • Terrible crimes were committed in Iraq and Afghanistan and terrible crimes were committed at Guantanamo Bay. The perpetrators of those crimes are not in prison. But Julian is.
  • Julian is a publisher. Julian is also a son, a friend, my fiancee and a father. Our children need their father.
  • Julian needs his freedom. And our democracy needs a free press.


Focus on Information

Regardless of technical means, the core purpose of media on the web is to get information across. No gimmicks, overcoding or any kind of "dazzle." Dazzle can always be added, but the foundation is information. Almost always (really, always) your visitors came to the site to get information. They are not there for you to show off. They don't need to be tricked into some message. They are there for information, so provide information. It doesn't have to be dry, just direct and informative. The technology must support the information, not get in its way or attempt to substitute itself.

When I was changing majors in the summer of 1967 and sitting in a broadcast journalism class, I learned that we were lucky if listeners retained 5% of what we read on air. We were given three simple rules designed to help listeners and viewers retain the information we spent so much effort to gather:

  1. Write three versions of the story, each worded a bit differently to feel fresh when the story is read again, usually across an hour and then repeated.
  2. Structure the story into three sections:
    1. prepare,
    2. say,
    3. review,
    or as it was said to us:
    1. Tell them what you are going to tell them.
    2. Tell them what you have to say.
    3. Tell them what you told them.
  3. Avoid pronouns or non-specific nouns. Use full identifications and keep repeating the identifications:
    1. such as
      1. Jo Jones .. followed the next time by .. Jo Jones .. and so on...
      2. KCMO .. followed the next time by .. KCMO or Kansas City, Missouri
    2. not:
      1. Jo Jones .. followed the next time by .. she
      2. KCMO .. followed the next time by .. the city

Here is a two-paragraph real-world example from a HuffPost article about the course of "Grey's Anatomy" on television, where this rule is not followed (which is common):
https://www.huffpost.com/entry/meredith-grey-covid-19-greys-anatomy-end_n_5fbd35c2c5b61d04bfa48bcf

“Grey’s” was an instant hit when it debuted in 2005 with original cast members Pompeo, Sandra Oh, Patrick Dempsey, Katherine Heigl, Chandra Wilson, Justin Chambers, T.R. Knight and James Pickens Jr. In the years since, most of the main players have either left or been killed off the series, except for Pompeo, Wilson and Pickens Jr., who now star alongside a slew of supporting players including Camilla Luddington as Jo Wilson, Kelly McCreary as Maggie Pierce, Kevin McKidd as Owen Hunt, Caterina Scorsone as Amelia Shepherd and Jesse Williams as Jackson Avery.

It is not abnormal for “Grey’s” to put its characters in life-threatening situations, and many of the doctors on the show have faced near-death experiences, heartbreaking losses and tragic ends. George O’Malley (Knight) died following a bus accident that left his face unidentifiable. Lexie Grey (Chyler Leigh) and Mark Sloan (Eric Dane) sustained fatal injuries in a plane crash. And Derek Shepherd (Dempsey) was taken off life support after his car was hit by a semi-truck.

  • The first paragraph introduces us to a series of cast names.
    • Think of this list of cast names as a list of nouns on a memory test.
    • Could you remember/retain them?
    • Really?
    • For how long, especially in the middle of a lot of story context?
  • Already in this first paragraph the writer assumes we've stored and can recall (like a computer) whoever Pompeo is (from a previous paragraph) and that we've already registered "Pompeo, Wilson and Pickens Jr."
  • The second paragraph also assumes that we've memorized all the names in the first paragraph, in full:
    • "George O'Malley (Knight) ..." This bit of text is only the next paragraph, but it is unlikely we've data-stored the name of T.R. Knight, or any other actor, well enough for a last-name-only reference to work - yet that is the "standard" style, and the "standard" of information transferral, in most journalism.
    • Even Mensa members would have a tough time on a memory test remembering the names of the cast members in the first paragraph, or any list of nouns that length, for any period of time, even the short time it takes to reach the next paragraph.
    • It really isn't that hard to write "George O'Malley (T.R. Knight) ..."
  • Repeat identifications, unless they get in the way, with the intent of making sure your readers or listeners have a chance to retain more of your story. You, the writer, may think you are overdoing it, but have you really thought about how the story is being received by the reader or listener?
    • You put a lot of work into that story.
    • Why throw away so much of your precious "shoe leather?"
    • Throwing "it" out there is no assurance your information was either received or, if received, retained.

Watch PBS or listen to NPR and you will almost never hear the prepare/say/review structure or full noun references. I find it very frustrating, not to mention angering, leaving me asking who or what was meant. Our brains are not data recorders. We don't remember, assuming we even noticed, the names at the start of the story, which never get repeated. So I ask myself, who dropped the educational ball? Ask yourself at what point in a story you started paying attention (usually after a lone identification, leaving you with the following he, she, it, the council, the city, and so forth), wanting to know who or what was just being talked about, and ask yourself what you actually remember.

You will come up short, and most of the time you won't notice. Most readers or listeners just pass over the spot and continue with information which is now disconnected from its object. Redundant identifications not only help comprehension in audio stories; they increase comprehension in text as well. They also prevent having to stop reading and hunt back through earlier text to get a handle on an indefinite pronoun or partial ID (e.g., "Jones") later in the story. After a bit, it is common to ignore this, continue reading and come away short.

Tools of the Trade

The most important tool is "subject knowledge." How well you know your subject makes the difference. Just looking won't do. Looking and hitting motor drive (continuous drive on a digital camera) won't work because the clock mechanism can't see and, most importantly, can't hear. Further, if you don't know the subject you don't really know what to shoot, and after shooting you still don't know what to pick.


For an extended treatment of shooting for dance click on this link:
http://www.mikestrongphoto.com/CV_Galleries/PhotoCV_SubjectKnowedgeForDance.htm

Cameras never shoot your pictures. That comes from knowledge. The right camera can facilitate your shooting. The wrong camera can cripple it. The actual "moment" of shooting is a matter of your brain, your "flow," your body sense and an ability to really see that tiny moment as having dimension, texture and space. The fastest-moving environment demands the most patience and the most deliberate immersion in the scene.

Today my device "kit" consists of several video cameras with a mixer or two, mics and radio mics, digital audio recorders and several digital still cameras, sometimes with lights either on camera or in a studio setup (strobes and continuous) and maybe a background set. That usually covers most anything. The video cameras include a Sony XDCAM, which I usually use with 4:2:2 sampling, as well as a brace of my old Sony FX-1000 HDV cameras, a new Blackmagic Pocket Cinema Camera, a small GoPro and three 360 cameras, two of them 4K. The radio mics are usually placed on the stage apron to avoid sync problems (at the speed of sound, every 35-40 feet puts the sound one frame behind the action).
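That 35-40 foot rule of thumb is simple arithmetic. A minimal sketch in javascript, assuming a round 1,125 feet per second for the speed of sound and a 30 fps frame rate:

// frames of audio delay when the action is "distanceFeet" away from the mic
var SOUND_FEET_PER_SECOND = 1125; // approximate speed of sound in air at room temperature
var FRAMES_PER_SECOND = 30;       // assumed video frame rate

function framesBehind(distanceFeet) {
    return (distanceFeet / SOUND_FEET_PER_SECOND) * FRAMES_PER_SECOND;
}

console.log(framesBehind(37.5)); // 1 - a full frame of delay at about 37 feet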

I use headphones which cover the ears to monitor the camera sound. The ear cups isolate ambient noise from the sound source, and they must use passive isolation only. I realized this years ago when the house dropped my board feed and I didn't notice for five minutes: I was using noise-canceling headphones, new tech at the time, which fed me the ambient sound, not the sound from my camera.

I shot and developed film from 1967 until my first digital camera in 1999 or so. In the early 1990s I shot and developed film and used a film scanner. Now I shoot all digital, in raw+JPG. My Nikon D850 (45.7 MP) is full frame, with my older D7100 (APS-C) as my backup for dance and my Z6 as a full-frame body for mostly non-motion subjects. All the still cameras shoot video, and the full-frame 4K video from the D850 and Z6 is awesome, allowing a lot of flexibility, especially with my 16-28mm ultra-wide zooms, my favorite focal lengths. I don't know anybody today, including me, who seriously wants to go back to film for regular work. Digital long ago surpassed film.

Hardware Usability


This is a picture of a "press camera," one of the best camera designs ever in terms of robustness, handiness, reliability and flexibility. You can see everything clearly and you can operate everything easily. This was the camera I learned on in the summer of 1967. I used a Crown Graphic (I still have it) for years along with my 35mm (once considered "miniature" format) and medium-format (120, 220 and 2-1/4 x 3-1/4 inch sheet) cameras.

This particular press camera is a 4x5 "Crown Graphic," which you can read on the top lens lock holding down the lensboard, complete with lens and between-the-lens leaf shutter. A "Speed Graphic" is almost the same but has a focal-plane shutter, like a window shade, in the back, and would show a wind-up mechanism on the right side. The bracket on the right side is made to hold a large flash gun, which can be detached to point the flash as needed and which triggers the shutter with a solenoid. The film comes in sheets, 4x5 inches in size, loaded two at a time, back to back, in a film holder.

The front standard (which holds the lens on the lensboard) can rise, shift left and right, and tilt up and down, thanks to the bellows connecting back to front. The front focusing rack, mounted on the inside of the front cover which folds down, can be angled downward to lower the lens and also to tilt the lens to angle the focal plane. The casing shell is a mahogany box with a front panel which folds out to reveal the camera you see above.

The viewfinder at the top is to look through, and the two round windows form a split-image rangefinder for precise focusing. It can even be used in the dark by turning on a small bulb which projects a pair of images of its filament; you are in focus when the two filament images come together on the item or person you are focusing on. The back is a spring-loaded film holder which can be opened to focus on the ground glass, or moved off the camera to bring in roll-film backs, cut-sheet holders or Polaroid backs. There is a lot more I could tell you, and in person I love to bring out my own Crown Graphic.

Even though the 4x5 press camera was being edged aside by 2-1/4-inch-square (film negative size) twin-lens reflexes and 35mm cameras, a major reason the 4x5 was still used for the photo class was that the controls were clear and easy to operate. It was absolutely dependable.

At some point bells and whistles stop being helpful and just get in the way. Simple doesn't have to mean plain, just reliable.

And 50 years later, the Graphic gives me a way to make a point about usability design, where the device matches the physical proportions of the humans who use it. This picture shows you a very usable machine. It is great in the hand. All controls are readily visible and easy to work. It is also a joy of design, clearly derived from working experience.

  • The better that physical size and controls match human dimensions the better and more efficient the device is to operate.
  • The more universally consistent (standard) the operation the more useful, and more masterful, the operation can be.
  • The more design efforts attempt to impress by appearance, rather than function, the more awkward and error prone the operation.

Notice: directness, visibility and physical size are central to usable design. The point of this section is to understand that access to technology includes productivity as affected by physical dimensions and clearly understandable controls. This is the opposite of a great deal of supposedly great design. I really, really hate design that looks cool but is all but impossible to figure out. Most of my work in concerts is spent operating cameras, mixers, digital audio recorders and microphones of various types, in the dark. I have to have devices I can handle without going to menus, even simple menus, devices with controls I can manipulate by feel and habit.

A Short Anecdote About Selecting

When I took that class in the summer of 1967, the teacher, Frank O'Neill, related a story about being one of several young photojournalists with nice, new roll-film cameras which could shoot an entire roll of 2-1/4-square "120" film (12 exposures) rapidly, just by clicking and cranking. But "the old guy" on staff, who had been there since Moses and before, managed to get the front-page picture, "over the fold" (top of page), again and again, even against their rapid shooting, even though he was using a sheet-film press camera (a Speed Graphic) such as the one above.

They finally came to realize "the old guy" simply knew what was coming up, knew where and when to position himself and then waited for the single exact moment to shoot. Shooting more volume simply gave them more work to process and didn't by itself give them any better choices from the contact sheet. The key was not how rapidly they could crank out exposures, it was how exactly each shot was selected when shooting.

FAST - Focus, Aperture, Shutter, Take

We were taught an acronym for "rapid" shooting with the press camera: FAST. It was a mini-checklist to be carried out with each sheet of film shot. You might notice it doesn't include the film handling between shots: inserting the dark slide, pulling out the film holder, putting in either the other side of the holder or a fresh holder with unexposed film, pulling its dark slide to get ready, and storing the slide in the spring clamps on the back of the camera.

  • F - Set the focusing rack to the distance of the subject and lock it with the track lock
  • A - Set or make sure the aperture is at the right f/stop (if set previously, always, always recheck that it is still there)
  • S - Set the shutter speed (or check to make sure it is still set right), then cock the shutter so it can be fired
  • T - Take: look through the viewfinder, compose your shot, wait, and at the exact right moment press the shutter release, on the camera, the flash gun or the cable
Then rinse and repeat. If you get this right you can churn out more than one picture a minute. That may seem ridiculously slow compared to cameras today which take 7 or more frames a second, but remember, the selected moment is the entire point. If that isn't right it doesn't matter how many frames you shot; none of them count if they are not right on the moment.

 

Still Cameras for Dance

The single most important feature is the ability to see the action directly. The viewfinders of all film cameras allow this. The viewfinders of digital SLR (DSLR) cameras allow this. The viewfinders of mirrorless digital cameras do NOT: there is considerable delay between what is happening and when you see it. For dance, avoid cameras with electronic viewfinders. The only exception would be something like the Fuji bodies with a combined optical viewfinder and a digital overlay to show settings.

Remember: The camera doesn't shoot the picture. It either works with you or works against you.

Remember also that dance may be among the most technically demanding types of photography. It requires knowing your equipment like the back of your hand as an assumed starting requirement rather than a result. Then, dance requires the knowledge which lets you pick exact moments, in the moment (not later). Once dance photography was under my skin, almost all other genres bored me. They just went flat for me. Even shooting dance "action" shots set up in a studio, with strobe light, poses and a dancer jumping in place in front of a backdrop, went flat compared with a dancer moving through a production (which includes immersion in a more total environment).


KU's UDC (University Dance Company, fall 2019). Above left, a posed leap lit by studio strobes as part of dancer head/action shot combinations. Above right, a leap during rehearsal of choreography.

The first professional camera I owned was a Canon rangefinder in 1967 (me, in mirror, at far left, and to its right me in Cheyenne, on base, quite serious by now, with one of my Leica M2s), but it had shutter-bounce problems the store couldn't fix and I wound up with Nikon (whose lenses still work on my Nikon digitals). I am agnostic about brands and, for the most part, about other camera features. I learned 50 years ago in Cheyenne, Wyoming, where I worked part time at Great Western Camera Exchange whenever I came back to base from a TDY (Air Force for temporary duty; I had a job [geodetic surveyor] which kept me traveling for most of my four years), that the best camera is one you feel comfortable with in your hands and that you care enough about, in terms of interest, that it is natural for you to operate.

I got some of my best camera deals (Leica M2s, for instance; one of them appears in the right-side picture above) from people who had asked for the "best" camera and were sold (in all sincerity) a camera which was too complex, so they put it in a closet for years until they finally brought it in to sell. The question they were really asking was "What camera will give me dependable pictures of me and my family without requiring me to relearn it each time I take it out?"

Using a mirrorless camera for dance is a flat-out no-no, one of the few equipment recommendations I am not agnostic about. I will explain in detail (and I mean to type up a script showing why a DSLR is far better for dance than almost any mirrorless camera). I can show you one of both in Nikon: my Z6 mirrorless and my D850 or D7100 DSLRs.

You will save yourself enormous frustration by getting a DSLR instead of any mirrorless camera. You will also just have to put up with the audible noise.

 

Note: about being obvious in a dark theater:
1 - Any mechanical shutter, even on the famed near-silent Leica rangefinders, is audibly noticeable in a theater. Metallic click sounds just stand out, even at very low volume.
2 - In a near totally dark theater the single-LED focus-assist light on a digital camera lights up the entire room each time you press the shutter. Turn it off (in the camera menu).

Equipment can't make the picture for you, though from time to time it can stumble on just the right frame regardless of equipment.
The right equipment will let you work when you want, as you want (mostly).
The wrong equipment will get in your way again and again and will keep you frustrated trying to get the exact moment. And if you have nothing else to compare it with, you might not even realize why it is so tough to shoot at the exact moment to get the exact shot you were after. Moving from an EVF camera to my first DSLR 20 years ago was a revelation to me.

A DSLR (Digital Single Lens Reflex; a "reflex" reflects the image through the lens onto a focusing screen) shows the scene as it happens because your eye is looking directly through the lens (via the mirror and the eyepiece). So what you see is happening as you look at it. No delay in what you see. (Note: there is a slight delay after pressing the shutter button while the mirror slaps up, out of the way, and the shutter fires.)

A mirrorless camera sends image information from the image sensor to an EVF (electronic viewfinder), a tiny television monitor. The electronics between the sensor and the display produce a lot of lag between what happens and what you see in the viewfinder. That lag is added to the same slight shutter delay a DSLR has. If you have more than one television at home you may notice a difference in timing when they are on the same channel, and even several seconds (3-6) of difference if the feed is being sent through another tuner, such as my DVD/VHS recorder which can record programs off the air.

At home it is common for me to hear the start of a line of dialog in one room, walk into another room, and hear the same words spoken again a few seconds later. It can be a bit disorienting. You would think that couldn't happen with electric signals, which travel at light speed, and yet the electronics used to process and output the picture take time between acquiring the signal off the airwaves, internet or cable (all sources of delay). We are all familiar with the need to buffer signals over the internet, and with the interruptions and audio/video delays when participants in a Zoom meeting are talking. The electrical distance between camera sensor and camera viewfinder is much shorter and far more direct, but it still produces a delay.

The lag is bad enough that the motion can easily be 4-7 feet away from what you thought you were shooting when you clicked the shutter. When I first got my Nikon Z6 I was hoping for a camera which would be silent, avoiding the sound of the mirror slapping up to clear the way between lens and sensor. I've badly wanted a silent camera since I first started shooting performances. I knew there might be some lag but I hoped it would be easily workable. I first took it out of the box for a dance class Nicole was teaching in Hays.

I should note that the lag I am talking about is very large for me, compared to the extremely rapid response of even the oldest of my DSLRs. Almost all other photographers don't see the lag or think it too small to worry about. The sales photog will try to tell you it is fast and you don't have to worry. Don't even try to argue the truth; they are true believers, not practitioners. Few photographers shoot dance at all, and even fewer shoot dance in rehearsal, production and performance, and usually they default to "continuous drive" to get a series of frames they can pick from later, assuming that is the best they can do for something as fast as dance. I argue just the opposite, and as such usually find myself in the heretic's chair. Trying to convince the 99-plus percent of non-dance photographers that they have to choose each and every shot, regardless of how many and how quickly, is like pulling teeth.

Nicole was teaching a skirt dance. In the viewfinder, when she pinched the hem of the dress off to her left side, I shot the frame, but I didn't get what I saw. What the electronic viewfinder showed me had already happened and was gone. The picture I took was a full arc of cloth up and over, completely covering her face. The viewfinder showed a scene which was behind the action by a good 5 feet of movement.

The Hays (Kansas) Symphony in performance, 7:30 pm at Fort Hays State University in Hays, Kansas. Left: warming up pre-concert (Nikon D850 DSLR, 16mm, ISO 4000). Right (Nikon Z6 mirrorless, 200mm, ISO 6400): Shah (Shokhrukh) Sadikov conducting Saturday 9 Feb 2019, here in his second piece conducting, third piece in the program, "Pictures at an Exhibition" by Modest Mussorgsky. Photo copyright 2019 Mike Strong, with full usage permissions to Shah and the Symphony.

The specific reason I ordered the mirrorless camera when I did was to photograph the Hays Symphony Orchestra. The conductor (Shah, from Azerbaijan) hired me to shoot, and I figured a silent camera would be perfect. But even here it didn't work. Not so much on the musicians, who didn't move that much anyway, but on soloists, who actually move more than you would think, and especially on the conductor. The conductor's baton would show far to his left in the EVF; I would shoot and find out that at the moment I pressed the shutter button he had already swept the baton way to his right. Big, fast sweep.

To get the baton balanced in front of his face (above, at right) I had to shoot well before his hand was in that position, and discard numerous frames, guessing when to shoot. Guessing and discarding so many frames is not something I would normally need to do with a DSLR. The mirrorless body was brand new to me then, so I was still learning to use it, but even so the problem persists: a large (for me) time lag between the actual action and what is shown in the electronic viewfinder.

"Grip 'n Grin pictures using the Nikon Z_6 mirrorless camera and an adjustable LED light panel on the top of the camera at VIP Reception under very challenging light conditions ranging from full daylight outside the rooftop windows to deep differences in skin tone combined with shadows areas in the middle of the penthouse club room. - KCFAA Race, Place & Diversity Awards Dinner 2019 honoring Ben Jealous (center, at left), former National President and CEO of the NAACP and former Democratic candidate for governor of Maryland, at Westin Crown Center ballroom Friday 24 October 2019. At left on the left and at right on the right is Debbie Brooks, President Kansas City Friends of Alvin Ailey. Photo, copyright 2019 Mike Strong with full usage permissions for KCFAA and persons in the pictures.

There are places mirrorless is fabulous.

  • "Grip 'n Grin" pics where I want to fully see the image and not guess at exposure, so all those handshake pics can be sent to funders and sponsors. It also massively reduces post-processing effort to rescue the under or over or unevenly exposed frames you get with DSLR's even when using flash fill. And, those are posed, meaning I have a lot of time to make it right, according to what I see in the electronic viewfinder, before I push the button.
  • Backstage when it is very dark (this particular camera is full frame but with half the pixels, meaning each pixel is much larger, which in turn results in much better very low light behavior and very silky tones instead of noisy, grainy tones).

Actually, I did have to use the Z_6 with American Youth Ballet one night when the D850 locked up for some reason and I didn't have time to deal with it. I wish I had also had my old APS-C (half-frame sensor) D7100 as a backup camera (which I normally do), because it is very quick and nimble in operation. That night my only other body was the Z_6 mirrorless. Crap! What I figured out was to use my left eye at the viewfinder just to maintain the framing, and my right eye looking over the camera, along with listening to the music, to determine when to hit the shutter. Very awkward, but it did work. I just never want to have to resort to that again. Really a pain, but it gave me a technique I could use.

About using both eyes open: I learned to do this long before I had a camera in my hand. It was the method for looking through microscopes, to avoid eye fatigue from squinting. Eye fatigue can also produce headaches, as with the old CRT (cathode ray tube) monitors. When I switched from cathode-ray-tube displays to flat-panel LCD, then LED, monitors and laptops, my workday headaches stopped. That is because the eye is fast enough to track the raster-scan beam, trying to focus on it and keep up. We see the full picture but the eye muscles are going like crazy.

That said, I am right-handed and left-eyed, called cross-dominant. In any case, when you keep both eyes open on a microscope, one eye takes over and the other is just kind of left out of the game. Something our brains do. It also works for cameras, and in the case of using a mirrorless camera to shoot dance, I was able to deploy each eye for a slightly separate task. Awkward, but workable. Still, I will take the DSLR.

Otherwise I also tried estimating where a move was going and shooting far, far earlier than I would with a DSLR. With a DSLR I can make very fast, last-fraction-of-a-second changes in timing (the dancer is not always in the same place in the beat or off-beat). With a mirrorless I have to throw the dice and hope they land where I anticipated. We are so used to shooting with one eye on the viewfinder that my left/right-eye split method of shooting with the mirrorless camera is a bit hard to pull off, but you can get used to doing it, if you have to.

All in all, after all these years, I do not have a good, all-around method of shooting dance silently. I still want that. Kind of a holy grail. But I am pretty much resigned so far, after working with the Z6 (which I DO love in its own right), that a mirrored DSLR is what I need to work with. It is the best all-around machine for timing when to shoot and as a lens platform. That last point, a lens platform, means you have to think in terms of the system. The new mirrorless cameras have new lens mounts, introduced along with some extremely expensive glass which, to be honest, hasn't impressed me as taking full advantage of the larger diameter of the mounts.

The supposed advantage of allowing extra-fast lens designs is not shown in the still-average apertures offered. Besides, wide-aperture lenses going back to the 1950s, including Canon's f/0.95 lens for its rangefinder, and the still-standard f/1.4, worked just fine in the lens-mount diameters used from those days to the present. So all I see in that direction is extra expense without added advantage. No one has even suggested still wider apertures in the future for these new, wider lens mounts, which makes the reason given for the larger diameters specious.

Camera Basics, No Frills

Digital cameras added a lot of frills, but no matter how fancy, all the bells and whistles work around the same set of basics cameras have always used.

  • ISO - a number indicating sensitivity to light of the film or the sensor.
    • The lower the number, the more light is needed, but also the smoother the tonal scale and the better defined the small details.
    • The higher the number, the darker the environment you can shoot in, with a penalty of less detail and a more contrasty tonal scale.
  • Shutter speed - the amount of time light has to enter the camera to form the image
    • a fast shutter speed, used to "stop" motion, needs a lot of light and/or a higher ISO, or a very large aperture diameter (small f/stop number)
    • a slow shutter speed works with less light but will also blur objects in motion
  • Focus distance - the focused distance is the exact spot of maximum sharpness; depth of field extends roughly 1/3 in front of and 2/3 behind that point. (The hyperfocal distance, a separate idea, is the focus distance at which everything from half that distance to infinity is acceptably sharp.)
  • Aperture - like the eye's iris, the diameter of the hole through which light travels.
    • The smaller the diameter:
      • The more the depth of field (distance in front of the camera which is in focus)
      • The more light is needed
      • The larger the F/stop number (focal length of lens / diameter of aperture) i.e. 50mm / 3.125mm = f/16
    • The larger the diameter:
      • The shallower the depth of field (the area in focus is very slim)
      • The lower the light you can shoot in
      • The smaller the F/stop number (focal length of lens / diameter of aperture) i.e. 50mm / 25mm = f/2.0
    • On zoom lenses, try to buy only constant-aperture lenses if you are shooting in low light and theatrical situations. Most zooms, the less expensive ones, have apertures which vary as you zoom. So an f/3.5-5.6 lens (fairly standard) may be used wide open at f/3.5 at its widest zoom setting. When you zoom to the narrowest angle of view, the maximum aperture drops automatically to f/5.6, leaving you with well under half the light you started with, about a stop and a third of underexposure (a quick sketch of the arithmetic follows this list). Constant-aperture lenses keep the same aperture (and exposure) but are more costly. If you don't have the money at first, then you will have to do some adjusting. One method is to set your ISO and shutter speed to use the largest f/stop of the lens when fully zoomed (f/5.6 in our typical example). That way zooming doesn't unexpectedly change your exposure.
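Here is the promised sketch of that stop arithmetic in javascript. The f-numbers are from the example above; the formula is the standard one: stops = 2 x log2 of the ratio of the f-numbers.

// exposure difference, in stops, between two f-numbers
function stopsBetween(f1, f2) {
    return 2 * Math.log2(f2 / f1);
}

console.log(stopsBetween(3.5, 5.6).toFixed(2)); // 1.36 - about a stop and a third lost at full zoom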


In the aperture settings above, if we assume a focal length of 50mm then the diameters at each aperture setting are:

At a focal length of 50 mm:

f/stop    diameter (mm)    area (sq mm)
f/16      3.125            7.7
f/11      4.545            16.2
f/8       6.25             30.7
f/5.6     8.93             62.6
f/4       12.5             122.7
f/2.8     17.86            250.5
f/2       25               490.9

In addition to f/stops, a further adjustment for the percentage of light actually transmitted through the glass gives an aperture rated as a "T-stop."
For our purposes we will leave it at the simple FL / diameter = f/stop.
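The table above is easy to recompute. A minimal sketch in javascript using the same FL / diameter relationship:

// diameter and area at each f/stop for a 50mm lens (matches the table above)
var focalLength = 50; // mm
[16, 11, 8, 5.6, 4, 2.8, 2].forEach(function (fstop) {
    var diameter = focalLength / fstop;             // mm
    var area = Math.PI * Math.pow(diameter / 2, 2); // square mm
    console.log('f/' + fstop + ': ' + diameter.toFixed(3) + ' mm, ' + area.toFixed(1) + ' sq mm');
});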



For these two lines of figures we set the focus at the same distance. The aperture selection determines the "depth of field."
The top line of figures shows the focus when wide open, such as an f/2 aperture (at top right).
The bottom line of figures shows the focus with a small aperture, such as the f/16 illustrated at top left.

 


This photo was shot using manual settings from the rule of thumb for sunlight.

A rule of thumb for outdoors

With the sun behind you, shining over your shoulder, the standard rule of thumb is centered around f/16.

  • Aperture: f/16
  • Shutter speed = 1 / ISO
  • So at ISO 400 (like old Tri-X film) use this setting in daylight: f/16, 1/400th sec
  • for open shade, open up by two stops (i.e. in the example, go to f/8); a small sketch of the rule follows this list
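As a minimal sketch of the rule in javascript (the function name is mine, not any standard):

// sunny-16 starting exposure for a given ISO
function sunny16(iso) {
    return { aperture: 'f/16', shutter: '1/' + iso + ' sec' };
}

console.log(sunny16(400)); // { aperture: 'f/16', shutter: '1/400 sec' }
// open shade: open up two stops from f/16 to f/8, same shutter speed and ISO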

Note: stay away from automatic, even semi-automatic, settings. The manual settings are really simple enough, don't change unpredictably as you swing the camera, and allow you to shoot freely at a specific light level across a scene which may have varying amounts of light behind the subjects.

What is now called ISO is directly related, in numerical values, to the old ASA ratings for film. There have been numerous sensitivity rating systems; they have all been supplanted by the ISO rating for sensitivity.

A good starting point for most theatrical lighting is an ISO of 3200, a shutter speed of 1/160th (if you can) and a wide-open f/stop, such as f/2.8 on a regular medium zoom with constant aperture.

The viewing screen on the back of the DSLR is a great way to check your exposure. Unlike film, where you needed the rule of thumb, a light meter or, the old version of instant display, Polaroids, which took 60 seconds to make a print, after which you did a mental calculation translating the ASA of the Polaroid film to settings for the ASA of your film.

 

Above is one of my mixers. It is a medium size mixer you might take on a job. Usually, on location, I carry a very small, 4-mic, 6-channel mixer which can be powered by battery.

If you think all the rows of knobs and volume sliders (pots) are complex, you couldn't be more wrong.
I learned this at a radio station job back in 1967-68.

Above, on the far left, is a picture I took of Ron Kruse at KTTT in late 1967. KTTT was my first radio job. I was already carrying a camera everywhere, even at the radio station, in this case a Canon rangefinder with a 50mm f/2 lens. On the left edge of that picture you can see a rack of continuous-loop tape "carts" (cartridges, like 8-track) for commercials and spot announcements, with a 7.5-inch reel-to-reel tape recorder behind, along with a rack of LPs at hand (for the day's playlist, usually some from the station and some from the DJ). There was a library wall of LPs to the left of the picture, behind the operator's chair. I spent a lot of hours on this board and engineered for two polka shows. During my reporter hours there was a small sound booth with a window just behind my camera position, and I would read news from inside the booth.

The board we worked with is under Ron's mic (not visible in this shot) and looked almost exactly, as I recall, like one of the two pictures to the right. It seems there were two VU meters, but otherwise note the round pots (potentiometers, i.e. volume controls), each with a 3-position switch just above (off, cue, live), and the panel above that with input selection switches and the meter(s). Because the purpose of each pot was fixed (a factor of having enough pots), operation was clear and smooth, with few errors.

The radio station, KTTT in Columbus, Nebraska, my home town, had just replaced its main control board. The old board was now in the production room for creating commercials and other recorded messages. The old board had only a few knobs (pots), and each knob had multi-position switches, and some of the switches had switches. Each pot could have several functions, controlling various equipment depending on the positions of the switches. You could easily make mistakes by not getting the right combination of switches. Luckily, in the production room you could just re-do your recording.

In the main control room, the shiny new board had rows of round-knob pots (volume controls). Each pot went to one input device: a tape machine, microphone, turntable or remote line. Each was clearly labeled. This discrete (individual) assignment of function to specific, standard pots made the board easy to learn and very quick and smooth to operate. There was seldom any confusion about what you were feeding to the airwaves. The set of pictures just above shows boards almost exactly like the board at KTTT. Above those pictures are two shots of one of my medium-size mixers, using modern sliders instead of the earlier round knobs. Each slider controls a specific device, and each knob in the rows above has the same function for each slider below. The set of sliders in the right middle sets equalization, with each of those sliders controlling a specific frequency range.

I encountered the same situation in early summer 1974 at WGVA in Geneva, NY, where I had just been hired as a reporter. Only this time I was there for the transition from the old board to the new board. Once again, the new transistorized, modular board, with its gleaming rows of many, many pots, seemed intimidating, but with each pot controlling its own device and well labeled, the board was actually much, much easier to work. By the way, the old board, destined for the production room, with its tubes and components, was cleaned of its old dust by taking it to the parking lot, leaning it against a tree, soaking it with Formula 409 All-Purpose Cleaner, rinsing it with a garden hose and letting it drip dry for a week. It worked fine, in the production room.

Fitting the Tools to Humans


LEFT: Picture from the 1981 Radio Shack catalog. RIGHT: One of my video cameras. Note the visible controls, the hand-controlled lens rings and the large white lettering (instead of the more usual hard-to-read, "cool" tiny black lettering).

I owned one of these Radio Shack TRS-80 Pocket Computers (made by Sharp). I bought it in 1980 for a mere $249. I also bought a docking stand and a printer as well as a tape cassette machine for storing and loading programs. The pocket computer had 11K of ROM (for BASIC) and 1.9K of RAM (for user programs).

Oh, the Pocket Computer had a real user's manual
printed on real paper!
made from real trees,
growing in a real forest.
NO PDF.    (Actually, PDFs [Portable Document Format, a precursor to universal web pages, invented by Adobe] would not exist for more than another decade.)

The Radio Shack pocket computer was roughly the same size as a smart phone, but unlike a smart phone, the screen was a single-line, 24-character, backlit, yellow-background LCD showing uppercase characters only, and no color.
The smart phone is often nothing but screen with line length limited only by font size, screen size and orientation.

The smart phone's keyboard is an onscreen keyboard, prone to typing errors, painted on the screen as needed.
By contrast, almost the entire face of the pocket computer was a fixed hardware keyboard with real chiclet keys that had a tactile feel. Typing was relatively easy, and although the size required a certain hunt-and-peck style, the typing was far more accurate.

I wrote a lot of programs, including astrology-chart-making programs, both solar and sidereal. I was very into things like that at the time and didn't have $2,500 for a PET and a commercial program, so I wrote my own. (PET stood for Personal Electronic Transactor, actually one of the first personal computers [PCs].) That one-line screen got a lot of scrutiny from me, and the printer gave me a lot of printouts for editing. But it worked. Before long I got a Commodore VIC-20 ($300) and a monitor hookup to my television. Then a Commodore 64 ($595, 1982) and a real computer monitor. Both had a hardware keyboard built into the computer (they were both basically computers inside a keyboard).

Because it was so much easier, and such a relief, to work with a full-size keyboard and a real screen, I quickly left the pocket computer behind. That was before 1983. I now have desktop computers with 32-inch monitors, or laptops plugged into a 32-inch monitor. As my own home office monitors have grown larger, most of the world's screens have gotten tiny and portable, as phones which do a lot more than phone. Their interfaces, with onscreen keyboards, have become difficult to see fully and error-prone to type on. It feels as if the small-screen world is almost 40 years backward. Tiny screens and pads have become such a norm that the newest generation of tech whizzes thinks this is normal. Then ...

In early 2019 one of the weekly (or so) newsletters emailed to me from iZotope, which makes one of my favorite audio editors, RX6 and now RX7, carried an article from an editor / sound engineer about his great revelation in working. Suddenly his work became so much easier. He had been using a tablet for years and, from his writing, seemed to consider a tablet the norm. Like a smart phone, the tablet generates an onscreen keyboard and onscreen controls to edit or control sound. The writer / audio engineer's revelation? He had just started using his own software on a regular desktop with a full-size screen, a hardware keyboard and a hard-wired separate mouse. It was free, easy and quick in operation. He could do so much more without getting tired, and get it done in less time. He was recommending this brilliant new knowledge for everyone.

I'll admit to a totally unworthy sly chuckle on my part, thinking the author must be a kid, you know, 30 or so, and this kid is learning the lessons I learned 30 or 40 or 50 years ago. Sometimes that is just the only way you get this kind of learning. Ask me, I was those ages.

To a lesser extent the tablet form is also usable as a production device, but it is currently hampered by its input devices (a visual-touch keyboard rather than hard-key touch, which allows faster typing and helps the visually impaired), working memory, storage, slow central processing units and so forth. Often, over the years, new developments are simplified versions of existing tech, just to wipe out the cobwebs. Then the new, simple version starts getting complicated as features find their way into the applications because certain utilities are needed. Before long the new stuff is looking like the old stuff, with a different name. I've watched that happen several times over.

So it is reasonable to expect these input and storage limitations to be overcome in the future, making a more useful tool for many uses, and to expect sizes to grow to something more usable for graphic arts. In the Windows desktop arena, Hewlett-Packard introduced a 20-23-inch (diagonal measure) touch-screen desktop computer with a physical arrangement much like the iMac. Clearly someone is looking at a workstation rather than a newsstand and party line.

 

Delivery

Despite the wide range of delivery venues today, from stage to screen to phone to event effects and even smart watches, most "multimedia" delivery we experience comes to us on web pages with a mix of text, images and video. Those may include Facebook and Facebook-owned Instagram ($1B in 2012), WhatsApp ($19B in 2014) and Oculus ($2B in 2014), or the still-independent Snapchat and Twitter. These need to work together, often as interactive presentations. On this page we will go over the major details needed to understand practical usage for text, images, audio and video. First the origin of the term, then a look at earlier media sensory flooding, long before today's term.

Then there are delivery methods: web pages, phone apps, social media, streaming services, picture posting portals, video and printed material. Posting portals include Facebook, Flickr, SmugMug, YouTube, Tumblr, Blogger, Reddit, Embed, LinkedIn, Mix, Vimeo, Twitter, Instagram, Snapchat, Pinterest, or teleconferencing with Zoom, FaceTime, Skype, HouseParty, Hangouts or streaming services such as iTunes, Stitcher, Spotify.

Web pages remain a primary means of delivery, with individual "apps" (applications, i.e. programs) playing a major role, although apps often access web pages themselves. Paper has combined text with images for centuries. In the 1980s printing was changed by desktop publishing. Typesetting machines, controlled with electronic text markup (a precursor to, and the original model for, HTML), were displaced by electronic documents using technology developed for Xerox duplicating machines, but with lasers rather than copying lenses. This is where the first laser printers came from. The new typesetting machines were called imagesetters because each page they "set" was drawn as an image containing any images as well as any type.

In the early 1980s Adobe developed PostScript and Encapsulated PostScript as graphics languages to create printed results. Along with these they developed Acrobat, which produced PDF documents from PostScript. You could think of PDF as a pre-web universal document. PDF stands for Portable Document Format, meaning you could create a PDF and use it anywhere, on any computer and imagesetter. A PDF's internal structure is almost the same as a PostScript file, except that all functions (DEFs) have to be defined in the header, while in PostScript a DEF can be defined and redefined, even with the same name, at any point in the file. I wrote a program which converted AutoCAD files into PostScript in the 1990s. That is also when the web came in and I began my first website, designed to let our beta testers download the newest compile of AutoScript.

Web Pages

Web pages, which are displayed in imitation of printed pages, are just simple text files with "tags" giving commands to the browser on how to display the elements of the page. The tags are a concept borrowed from the copy-editing markup used by editors for centuries. The format is called HTML (HyperText Markup Language), with a syntax borrowed from electronic typesetting's SGML (Standard Generalized Markup Language), which in turn comes from IBM's Generalized Markup Language (GML), developed in the late 1960s as an IBM project to create electronic documents.

My first web pages were created with a plugin for Microsoft Word. I changed to a programmer's text editor, which was more fluid for me, and, to sometimes simplify things, one of the first WYSIWYG HTML editors, PageMill. That product was dropped, but I had already moved to HTML-Kit as a primary editor and to Dreamweaver (first written by Macromedia and later bought by Adobe). In those days the product was lean and clean because it had to be. Download speeds using modems across phone lines were limited, and pages needed to be up and displayed within 3 to 10 (at most) seconds or you could easily lose visitors. Even so, that was speedy for the time. Imagine a not-so-far-ago time, 1810, when "Lady of the Lake" by Walter Scott broke all previous poetry sales records at 25,000 copies. Distribution was speedy for that time.

The original browser for the web was designed not only to display pages but to edit them (WorldWideWeb, and later Amaya). Content Management Systems (CMS) are the outgrowth of this, through blog and education software. For web page creation and maintenance I still recommend Dreamweaver over any CMS. If you can use Word you can use Dreamweaver. CMS systems are actually yet another learning curve, despite the easy-to-use argument for them. Further, because they are designed to accommodate a large number of options, out of which each site or page uses only a tiny number, CMS pages are massively overloaded with unused code (by about 50 times: only about 2% of the downloaded code is active on the page).

HTML files - Anatomy of Text Structures

The primary delivery vehicle for information on the web is the web page. Web pages are text files with embedded tags (such as <i> and </i> for start Italics and end Italics) to format the text, and other tags for pulling in images, audio and video as well as animation and interactive controls. Web pages can be very simple or, as the medium has developed, very complex, with various constructions.

Here are the main parts of a web page and various components which can be added. For our purposes we will consider only web pages which stand on their own. Some pages are created partially filled, to be fully assembled at the web server before being sent to the visitor: pages with embedded programs which run at the server to create deliverable content (such as search pages) and/or pages which simply include other files (such as company headers or footers to standardize all pages on a site).

Before we get to the page contents, a word about HTML "tags." First, HTML stands for HyperText Markup Language. It isn't really a language as such, but it is descended from the copy markup used for centuries to specify how typesetters set type for an article: what is in boldface or Italic, in what font, at what size and so forth. In HTML, "tags" handle that job. Almost all tags have a starting tag and an ending tag. For example:

What looks like this: This is the start of boldface to the end of boldface in a sentence.
Is created like this: This is the <b>start of boldface to the end of boldface</b> in a sentence.

The HTML File

In a similar way, the HTML file has a starting HTML tag and an ending HTML tag showing the start and end of the document. Within the HTML file are a header, which is not displayed, and a body, which is what you see.

<HTML>
<HEAD>
------ this section is not visible but contains information about the document which is read by search engines and which shows at the top of the browser window (the title)
------ it also often contains scripting and stylesheet code affecting the operation and any interaction on the page
</HEAD>
<BODY>
------ this section, the "body" is what visitors see.
------ Here you will also find tags for images, video and audio. You also find a lot of scripting here and sometimes stylesheets
</BODY>
</HTML>
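Putting those pieces together, here is a minimal sketch of a complete working page (the title and text are placeholders of mine):

<HTML>
<HEAD>
<TITLE>Sample Page</TITLE>
</HEAD>
<BODY>
<h1>Sample Page</h1>
<p>This is the <b>body</b> text a visitor sees.</p>
</BODY>
</HTML>

Save that as a plain text file with an .htm or .html extension and any browser will display it.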

You can create a page in several ways. Any text editor can be used to write a page, though a dedicated HTML editor is probably easier for most people. If you can use a word processor such as Microsoft Word, then you are more than capable of handling an HTML editor, which is usually simpler, such as Adobe's Dreamweaver. Once written, you need a method to get the page online. In the past that has meant FTP (file transfer protocol). A web developer had a working copy of the website on their own computer (called a dev box) and a matching copy on the web server. In more recent years more and more work is created purely on the web, mostly with CMS (content management system) sites; nothing is on the local machine.

Additions to the page called by the page - three major sources of code bloat

Stylesheets - CSS

From the very first, each browser used a default method for setting text styles (font, bold, Italic, color, etc.) when showing a web page. That was built into the browser as a default, and it is still user-adjustable, though almost no one ever adjusts it anymore. The default styles could be extended with stylesheets, modeled after the stylesheets in word processors: before browsers, word processors used a stylesheet to determine the way text looked. The stylesheet was an electronic version of a stylebook, and each application had its own stylesheet syntax. The very early Viola browser had its own stylesheet format. Stylesheets, in turn, were modeled after the stylebooks used for centuries by paper publications to guide typesetting and layout.

Fonts and layout are not all that CSS (cascading style sheet) files can do. They have been extended to control how various controls behave, as menus and other items. Further, layouts themselves have become very flexible under CSS control. Because a page often pulls in multiple CSS files, each with more definitions than the page needs, rather than one custom-written CSS file with only the definitions used, we get more bloat.
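For comparison, a minimal sketch of a hand-written stylesheet covering the real needs of a typical article page (the class name is hypothetical). The whole thing is well under one kilobyte:

body { font-family: Georgia, serif; font-size: 1rem; line-height: 1.5; max-width: 42em; margin: 0 auto; }
h1, h2, h3 { font-family: Helvetica, Arial, sans-serif; }
a { color: #004466; }
img { max-width: 100%; height: auto; }
.caption { font-size: 0.85rem; color: #555555; }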

Scripts - Javascript

Javascript is a scripting (programming) language used to set up a variety of functions on the page, from selective downloads to message displays to page interactivity, with controls for forms and for media play. It could be one of the most compact forms of in-page programming, but because of massive working-code libraries, such as jQuery, javascript has become a major source of bloat. If you are programmer enough to connect to a function in a library such as jQuery, you should be a good enough programmer to write your own piece of code for that function, perhaps by looking at the library and adapting the code that works for you.
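As a minimal sketch of what that looks like (the element IDs are hypothetical), here is a common jQuery idiom next to its plain-javascript equivalent, no library download required:

// jQuery version:
//   $('#menu').hide();
//   $('#menu-button').on('click', toggleMenu);

// plain javascript equivalent (run after the page has loaded):
document.querySelector('#menu').style.display = 'none';
document.querySelector('#menu-button').addEventListener('click', toggleMenu);

function toggleMenu() {
    var menu = document.querySelector('#menu');
    menu.style.display = (menu.style.display === 'none') ? '' : 'none';
}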

Fonts - Downloadable - assuring the appearance of fonts

Before downloadable fonts existed for the web, the only way to get special fonts for a desired appearance was to create an image, usually a GIF file, of the text in the desired font. Otherwise the appearance of a page at the visitor's (client) end depended on whether they had the fonts you called for as a designer. If not, another font would be substituted by default. That problem was solved with downloadable fonts, but at the expense of a lot of extra code in font files downloaded with each page, taking more bandwidth.

While downloading your own specified fonts gives you more control over the look of the page, satisfying designers, looking great at the company board meeting and earning backpats for your supervisor, it also leads to a lot of bloat for the few characters (usually just headlines) that use the downloaded fonts. Downloaded fonts are seldom used for body text; they are almost always used for heds. In the page-size inspector sample below, fonts occupy 178kb of file size. The headlines probably don't amount to more than a few hundred bytes, but even if the headlines accounted for 1kb, the download would still be 178 times larger than what is used on the page.
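For the record, the CSS involved is small; it is the font files that cost. A minimal sketch (the file path is hypothetical; Oswald is one of the fonts called in the sample below):

@font-face {
    font-family: 'Oswald';
    src: url('/fonts/oswald-500.woff2') format('woff2'); /* hypothetical path; the whole font file downloads */
    font-weight: 500;
}
h1, h2 { font-family: 'Oswald', Helvetica, sans-serif; } /* only the heds actually use it */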

 

Page Formatting and Device Targeting

  • electronic billboards
  • desktop monitors
  • tablets
  • smart phones
  • smart watches
  • point of sale (POS) checkout displays

Expected screen sizes started with 640x480 monitors and moved to larger pixel dimensions of 1920x1080, then to small devices: phones and tablets with rotatable dimensions, and even smartwatches and cloud-based POS (point of sale) screens at the checkout lane. And those are just pixel dimensions. The actual screen size in inches or centimeters differs widely depending on dot pitch, the size of the individual pixels on that screen. 1920x1080 is the native resolution of the 32-inch monitor I am typing this on. 1920x1080 is also the native resolution of my 5-inch Samsung smartphone. That's a size difference of 6.4 times between devices. It would be easier to develop applications which could adjust to physical size in inches or centimeters, except that the device doesn't return that information to the server, only pixel dimensions. So the designer has to guess, even today.

That said, there are various methods for tailoring your output to match the device. I'm not going to go into detail, just a listing.

  • degrade gracefully - the original way that web pages adapted, on the fly, internal formatting to any change in window size or screen size (still current but added to)
  • adaptive design - pages with different designs for various devices and window sizes delivered from the server based on information returned from the client's page request.
  • responsive design - pages using internal page scripting at the client (visitor) and styles (CSS) to change formatting arrangements, on the fly, when a window size changes (a minimal sketch follows this list).
  • semantic design - coding to show content based on context and document ranking in the code. More like a newer interpretation of ancient tags. Sorry, kids, been here before. Glad you have a cute new name for it.
  • semantic interaction design - without listing technical specs, just call this designing with user understanding and document purpose in mind, for clarity, simplicity and ease of use. Supposedly the same thing we should have been doing all along, from the beginning, but seemingly forgotten by "the super-tech kids" and now showing up with a catchy name which can scare the non-techy (it almost scared me, ha).
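Here is the promised responsive-design sketch (breakpoint and class names are hypothetical): a two-column CSS layout which stacks into a single column when the window drops below 600 pixels wide.

.columns { display: flex; }
.columns .col { flex: 1; padding: 0 1em; }

/* below 600px, stack the columns */
@media (max-width: 600px) {
    .columns { display: block; }
    .columns .col { padding: 0; }
}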

As you noticed, I did an eye roll on the last two items in the list above. They are just another version of what was always expected of good design, which has been left in the dust by the new whiz kids who now think they've discovered something new, especially since it comes with a lot of technical explanation. Similar to writing a ten-page law which boils down to "Don't walk on the lawn."

Page Weight and Energy Waste

Today's web pages, for various reasons, are massively overcoded: badly bloated with mostly unused code. This, I have to admit, is a pet peeve with me; compact code is in my coding makeup. In the page-size report below, the total page load is 56 times the actual HTML text. That means the informational content (the readable body text) is only 1.8% of the total download of code and "dependencies" which make up the page (CSS, javascript, downloaded fonts), not counting the images. Think of it like a stack of 100 sheets of paper, all blank except for 1.8 pages worth of printed text.

On the environmental front this uses 56 times more energy to download each page than it needs to. I recently saw an article complaining that email takes too much energy. The article failed to mention either web-page overcoding or Facebook and Twitter usage.

In the "old" (less wasteful, more compact) days the full page was best limited to around 30kb or so. That was largely because of modem speeds which limited the amount of information that could be downloaded for a page. The general rule of thumb was that a page should download in 3 seconds or less. Here the text portion (labeled "Document") is 35kb but this also includes lines of document code required to call the other components on the page, scripts, stylesheets, images and fonts. Waste. Bloat.

The size of any javascript and stylesheets should be no more than a few kb. You could easily handle most of the needs of a page within 3-4 kb or less in most cases. And frankly, most of the code in the javascript goes unused. Anyone with enough code savvy to deploy a typical jQuery file (library) should be able to write their own javascript for the small number of calls actually used. The same goes for the amount of stylesheet code: less than one kb of CSS is really needed for most pages, if that.

Then there are downloadable fonts, a "feature" of the current web and more bloat, used selectively. Generally the downloadable fonts are used for small sections of the document, usually headlines, just because the designer wants a particular look beyond the "safe" fonts that can be expected across all computers. Cosmetics; worse, cosmetics which are little noticed.

Even the images are often far too large. This was a continuous problem all along, with people not understanding how to make small image files for easy and quick download. With high bandwidth hiding wasteful sizes, the problem has only gotten worse.

Page-Size Inspector Sample

Here is a typical page tally, from a browser plug-in called Page Size Inspector (available for Chrome), which lists each file called and its size in bytes. I've modified its format to a three-column table. For each type of file (script, stylesheet, etcetera) the total bytes for that type are listed first, then each file of that type with its size in bytes. What to look for is the section with all the content, the Document section. The Document section is the HTML with the information (the body text you see on the page). In this case that is 35,353 bytes (34.5 kb). Compare that to the full page weight of 1,982,345 bytes (1.9 mb). Note that 1,982,345 / 35,353 = 56.07; in other words, the page-weight overhead is 56 times larger than the actual informational content (the content is only 1.78% of the total download size). Gross bloat, and all too common. Even with some of that code working for formatting and operation, we are still looking at active code being around 2% of the full download: 50 times the need. That is a lot of wasted energy in electricity. And a lot of pages are far worse.

URL: https://www.poorpeoplescampaign.org/about/
TOTAL: 54 files, 1,982,345 bytes

Type / File Name (truncated by the plug-in) / Size in Bytes

Document: 1 file, 35,353 bytes (this is the content, the body text)
    www.poorpeoplescampaign.org/about/    35,353

Scripts: 17 files, 905,056 bytes
    www.poorpeoplescampaign.org/.../jquery.js?ver    96,873
    www.poorpeoplescampaign.org/.../jquery-migrat    10,056
    www.poorpeoplescampaign.org/.../earch-filter-    66,166
    www.poorpeoplescampaign.org/.../select2.min.j    66,606
    unpkg.com/leaflet@1.3.4/dist/leaflet.js    140,468
    unpkg.com/leaflet.../leaflet.markercluster-sr    80,427
    www.poorpeoplescampaign.org/.../cff-scripts.j    187,827
    www.poorpeoplescampaign.org/.../jquery.blockU    9,566
    www.poorpeoplescampaign.org/.../add-to-cart.m    2,750
    www.poorpeoplescampaign.org/.../js.cookie.min    1,846
    www.poorpeoplescampaign.org/.../woocommerce.m    1,472
    www.poorpeoplescampaign.org/.../cart-fragment    2,940
    www.poorpeoplescampaign.org/.../core.min.js?v    3,931
    www.poorpeoplescampaign.org/.../datepicker.mi    36,380
    www.poorpeoplescampaign.org/w.../main_a65f8e1    182,483
    www.poorpeoplescampaign.org/.../wp-embed.min.    1,399
    www.poorpeoplescampaign.org/.../wp-emoji-rele    13,866
    data:application/javascript;base64,dmFyIHVyY2    382 (inline data URI, not counted in the file totals)

Stylesheets: 16 files, 780,985 bytes
    www.poorpeoplescampaign.org/.../gtranslate-st    693
    www.poorpeoplescampaign.org/.../common-skelet    22,198
    www.poorpeoplescampaign.org/.../tooltip.min.c    1,635
    www.poorpeoplescampaign.org/.../style.min.css    41,467
    www.poorpeoplescampaign.org/.../style.css?ver    30,440
    www.poorpeoplescampaign.org/.../cff-style.css    90,080
    maxcdn.bootstrapcdn.com/f.../font-awesome.min    31,000
    www.poorpeoplescampaign.org/.../ctf-styles.mi    35,155
    www.poorpeoplescampaign.org/.../woocommerce-l    16,542
    www.poorpeoplescampaign.org/.../woocommerce.c    62,669
    www.poorpeoplescampaign.org/.../search-filter    37,477
    www.poorpeoplescampaign.org/.../main_a65f8e1c    384,585
    unpkg.com/leaflet@1.3.4/dist/leaflet.css    14,106
    unpkg.com/leaflet.markerclus.../MarkerCluster    886
    www.poorpeoplescampaign.org/.../ocommerce-sma    6,758
    fonts.googleapis.com/0i,700,700i|Oswald:500&d    5,294

Images: 14 files, 82,307 bytes
    www.poorpeoplescampaign.org/.../logo-header_5    15,725
    www.poorpeoplescampaign.org/.../logo-footer_1    17,665
    data:image/png;base64,iVBORw0KGgoAAAANSUhEUgA    685
    data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0    1,865
    data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0    3,823
    data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0    1,656
    data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0    971
    www.poorpeoplescampaign.org/.../header-bg_8b2    2,395
    data:image/svg+xml;charset=utf-8,%3Csvg xmlns    110
    www.poorpeoplescampaign.org/wp-conte.../close    280
    www.poorpeoplescampaign.org/wp-conten.../prev    1,360
    www.poorpeoplescampaign.org/wp-conten.../next    1,350
    www.poorpeoplescampaign.org/.../about-bg_aa3f    32,363
    www.poorpeoplescampaign.org/.../footer-bg_b70    2,059

Fonts: 6 files, 178,644 bytes
    maxcdn.bootstrapcdn.com/.../fontawesome-webfo    77,160
    fonts.gstatic.com/.../EVItHgc8qDIbSTKq4XkRiUa    21,452
    fonts.gstatic.com/.../Hgc8qDIbSTKq4XkRiUa4442    22,396
    fonts.gstatic.com/.../zDREVItHgc8qDIbSTKq4XkR    20,320
    fonts.gstatic.com/.../VItHgc8qDIbSTKq4XkRi2k_    20,928
    fonts.gstatic.com/.../AIjg75cFRf3bXL8LICs18Nv    16,388
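You don't need a plug-in to run this kind of tally. Modern browsers expose per-file transfer sizes through the Resource Timing API; here is a minimal sketch to paste into the browser console (transferSize reads 0 for cached files and for cross-origin files that don't send a Timing-Allow-Origin header, so treat the result as a floor, not an exact figure):

    // Tally page weight by file type from the browser console.
    const doc = performance.getEntriesByType('navigation')[0];
    const resources = performance.getEntriesByType('resource');

    const byType = {};
    for (const r of resources) {
      byType[r.initiatorType] = (byType[r.initiatorType] || 0) + r.transferSize;
    }
    console.table(byType); // script, link (css), img, and so forth

    const total = doc.transferSize +
                  resources.reduce((sum, r) => sum + r.transferSize, 0);
    console.log('Document:', doc.transferSize, 'bytes');
    console.log('Full page:', total, 'bytes');
    console.log('Overhead:', (total / doc.transferSize).toFixed(1) + 'x the content');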
Servers, ISPs, Authoring and Platforms:
Word Processors, HTML / text editors and CMS (Content Management Systems, also WCMS)

A web server is similar to a restaurant server. You state your order and the server gets the document/food from the disk/kitchen and delivers it to your table/browser.
Depending on the type of request, your server will either return a prepared document/food as is, or it will assemble the document/food and then deliver it.
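To make the analogy concrete, here is a minimal server sketch in Node.js (assuming Node is installed; this is a demo only, since it makes no attempt to sanitize file paths):

    // A tiny "restaurant server": prepared dishes vs. cooked to order.
    const http = require('http');
    const fs = require('fs');

    http.createServer((req, res) => {
      if (req.url === '/hello') {
        // Assembled on request, the way a CMS builds a page.
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end('<p>Cooked fresh at ' + new Date().toISOString() + '</p>');
      } else {
        // A prepared document, read straight from the disk/kitchen.
        fs.readFile('.' + req.url, (err, data) => {
          if (err) { res.writeHead(404); res.end('Not on the menu'); return; }
          res.writeHead(200, { 'Content-Type': 'text/html' });
          res.end(data);
        });
      }
    }).listen(8080); // then visit http://localhost:8080/somepage.htm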

An HTML document is a text file with formatting added, like a printed magazine page. Anyone able to use a word processor (Microsoft Word is probably the most ubiquitous) can handle an HTML editor, which you could think of as a word processor for web documents stored locally, or a CMS interface for direct editing on the web. In a CMS the content lives in "content repositories" which can feed more than just web pages, although the display code is still HTML. This gets modified, generally through CSS (stylesheets), for various "platforms": mobile apps on phones, APIs for clients, kiosks, syndication of streaming material and other uses.

In the mid-1990s Microsoft Word had a plug-in which turned out good HTML documents in a WYSIWYG interface (GUI). Soon another nice HTML editor called PageMill was available, but it eventually dropped out of the competition. Dreamweaver (originally from Macromedia, now Adobe) is arguably the oldest surviving HTML WYSIWYG editor and remains very accessible to anyone with word-processor skills.

I use Dreamweaver, and still recommend it over any CMS. I like having the files complete, on local drives. Usually the argument for a CMS is that it is easier than teaching HTML code, such as it is. HTML isn't really that hard, but for a direct comparison, you don't have to mess with HTML in Dreamweaver if you don't want to. The skill set is similar to, perhaps not even as extensive as, that of a Word user. If you can handle Word you can use Dreamweaver, and get better, cleaner, more compact code than any CMS produces.

A CMS system (yes, I can say "system" after the initials without redundant grammar even though the "S" stands for system) is a database containing all the components which go into the web page. When a page is called it is assembled by the CMS server and sent back to the visitor.

At the same time, CMS editing usually results in bloated pages compared to Dreamweaver. A CMS has a large number of feature options. Each of those is contained in various software libraries, all of which are generally added to each web page. As a result the chosen features, the ones actually in operation for that web page, may represent less than 2% of the code contained in the libraries.

For a generalized library used by CMSs for a large number of clients, any changes need to be added without removing old code. Each library needs to maintain backward compatibility. Each library also has to accommodate each of the template packages for numerous clients in the same library. Bloat is assured. And that means the library packages (such as jQuery) become larger and more unwieldy because no one knows what to cut out, safely.

Names of several CMS systems: WordPress, Adobe Experience Manager, Drupal (actually a PHP platform for building a CMS), Sitecore and others. Add to that "learning" platforms such as Blackboard, Canvas and Moodle.

 

Images

Placing an image in a newspaper or magazine required taking a picture of a photograph printed on paper, through a halftone screen which subdivides the photograph into small areas (halftone dots) for ink. The size of the dot determined how dark (larger) or light (smaller) that dot appeared. Getting a printed result which carried white-to-black values depended heavily on the printer making the screen, on the dot shape, the ink, and how much the ink spread into the paper. You and the printer had to know each other. For example, in upstate New York I was able to hand in a print which would work on the wall, with full whites, full blacks and all the tones between. In Kansas City I never found anyone who could do that. All the prints for screen in KC I made with a tonal range from a very light gray (instead of white) to a medium dark gray (instead of black) in order to get the same kind of printed result in the newspaper.

Basic rule - Tweak your image to look best in the output medium (such as a newspaper, a web page, or an art gallery wall). You always want the end result to be dependable and consistent.

For some time after digital images were available on imagesetters, the screened (analog) image was sharper than the fully digital halftone dots. That changed as device resolution (pixels per inch) increased, with smaller and smaller pixels. In those days you got a digital photo by scanning a negative or print. Eventually publishers wanted digital images, especially once digital cameras started taking over.

By the early 2000s almost all pictures for media were shot with digital cameras. Those of us with film cameras put away our wonderful old favorites and embraced the digital realm. I don't know anyone who shot film then who wants to go back to it. Although there were, and are, some sweet tonal scales for prints from film that digital would not match for a long time, the clean workflow, the freedom from cleaning negatives of dust and spotting the prints afterward, and the immediate delivery on deadline to publishers were a huge bonus. And today, after scanning old film from way back, I know that my Nikon D850 is way beyond even my medium-format film cameras in detail and tonal scale. Just no comparison, not to mention ISO (light-sensitivity rating) capabilities not dreamed of then (for shooting in dark areas).

Before extensive javascript, css and web-downloadable fonts, images were often the largest single part of a page's total file size. For pages with several large images that remains the case but often the images are a minor percentage of the overall page weight.

The images used in a web page, by design, have to be lightweight versions of the full-size original camera files (the ones used for paper prints). Otherwise they are simply too massive to download in any reasonable time. If you have a camera file you will need to make a smaller version for the web, and you have to do this with the image file itself. Although you can set an image's display size on the page, this does nothing for the file size. If you fit a camera-size file to a small area on a web page, the file is still downloaded at whatever maximum size it has (meaning a very long wait for the page to complete), and when that large file reaches the page for display, most of it just gets discarded as waste pixels.

Technical Image Specs to Know

DPI - dots per inch - DPI has TWO (2) very different meanings for two different measures of resolution.
1 - An older term derived from halftone dots per inch for analog images from film on a printed page. Unless you are in the paper publication business, printing images in halftone, you won't need this. Just remember that for paper publications, when referring to halftone images, it represents the level of image detail printed on the page.
2 - Any digital image has DPI as an attribute. In this case the "dots" are really pixels and the better term would be "pixels per inch." Sometimes people say "pixels per inch," but the attribute still shows in the file properties as DPI. It is meaningless, even though many customers and publications persist in requiring a DPI of some number, such as 300 dpi or higher. Early in the history of digital images, DPI provided information (as a printing instruction) about the printed size on paper to any printer an image was sent to. At one time, if the image was 300 pixels wide and was created as 300 dpi, it would print out 1 inch wide. This soon became nonsense, as the image could be printed at any size. All you really need to know is the size it will be on paper and whether you have enough pixels to get the print quality you want at that size.
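The only arithmetic that actually matters is pixels divided by the pixels-per-inch you intend to print at. A quick sketch:

    // Print size on paper is pixels / PPI; the DPI tag stored in the file changes nothing.
    function printInches(pixels, ppi) {
      return pixels / ppi;
    }
    // A 6000 x 4000 pixel image printed at 300 PPI:
    console.log(printInches(6000, 300) + ' x ' + printInches(4000, 300) + ' inches'); // 20 x 13.33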

Size, measured in pixels and bytes - There are two basic sizes you need to keep in mind: a large file for paper prints and a small file for web and email uses.
1) Full size - the largest size possible from the camera, after any cropping. NOTE: this is too large to use on the web but is used for large prints on paper.
2) Web size - a small, lightweight version which can be sent in email or loaded into a web page in a short amount of time. Early web sizes, because of connection speeds (bandwidth), were sometimes limited to 300 pixels maximum side length with a maximum file size of 6 kb to 30 kb. There is no set standard for this and each image creator decides on their own specs. A web-size image just has to be small enough to be transported quickly across the internet. With much faster internet speeds, today's web sizes are more like 800-900 pixels maximum side length, or even 1200, with a maximum file size of maybe 100 kb. A browser-side sketch for producing one follows.
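Here is a minimal browser sketch for making a web-size copy (the 1200-pixel maximum and the 0.8 JPEG quality are my own judgment calls, not standards):

    // Shrink a camera-size image file to a web-size JPEG in the browser.
    function makeWebSize(file, maxSide = 1200, quality = 0.8) {
      return createImageBitmap(file).then(bitmap => {
        const scale = Math.min(1, maxSide / Math.max(bitmap.width, bitmap.height));
        const canvas = document.createElement('canvas');
        canvas.width = Math.round(bitmap.width * scale);
        canvas.height = Math.round(bitmap.height * scale);
        canvas.getContext('2d').drawImage(bitmap, 0, 0, canvas.width, canvas.height);
        return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg', quality));
      });
    }
    // Usage with a file input: makeWebSize(input.files[0]).then(blob => console.log(blob.size, 'bytes'));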

RAW files (RAW, or NEF on Nikons) - The most detailed files you can get from your camera. These have details you can't see directly on a monitor (or with your eyes) because they record in more than 8 bits (256 tonal levels) per channel.

JPEG/JPG files - (Joint Photographic Experts Group; say: JAY-peg) The most important file type in terms of usage. Best for photographs because of the way compression works in a JPG. These are normally limited to 8-bit color (256 tonal levels per channel), which is the most any common display can show and close to the most our eyes can distinguish.

WebP files - A still-image format using both lossy and lossless compression. Developed by Google from technology acquired with the purchase of On2 Technologies.

GIF files - (Graphic Interchange Format; say: JIFF) Best for graphic material with flat color areas. GIF compression (LZW) works along rows of pixels, so a run of a single color across the image compresses down to, in effect, an instruction for how far to the right that color extends. It is an 8-bit format created in 1987 for CompuServe. For a while in the 1990s the patent holder for LZW, Unisys, had a legal dispute with CompuServe. By 2004 the patents had expired and GIF usage was everywhere. In the meantime the legal battle was a prime impetus to develop PNG files.

PNG files - Portable Network Graphics - (say: ping, or list the letters: pee-en-jee) Usually larger than equivalent JPGs. Includes an optional alpha channel (transparency).

SVG files - Scalable Vector Graphics - a vector image format based on XML (Extensible Markup Language) for drawing two-dimensional graphics supporting interactivity.

SWF files - Shockwave Flash - Adobe's Flash format for multimedia, vector graphics and ActionScript, supporting interactivity. Now deprecated; Adobe ended Flash support at the end of 2020.

 

Lighting

For this one I'm going to refer you to a piece I wrote after having yet another way-too-dark show to shoot. From my position as a photographer I usually have little voice in how a set is lit, and yet I am expected to turn out a great, professional result which will (always does) get compared to pictures created in studios with all the controls, high-brightness electronic flashes and predictable poses.

Here's my complaint (whine, if you will [I claim whine rights]): http://www.mikestrongphoto.com/CV_Galleries/LightingVsDarkingForStageCameras.htm

I do have studio lights, though I rarely take them out to use. In most of my work, shooting productions on stage, any kind of flash or other lighting from a photographer is strictly forbidden. So, even if I wanted to (I seldom would), I can't. In short I have some of the worst possible shooting conditions: low light with high action, and abrupt, constant changes in light level, quality and color, so that I am constantly resetting the camera controls to keep up (note: NEVER, NEVER ever even attempt to use automatic settings).

High action would normally require the highest shutter speeds and, with enough light, the smallest apertures to ensure enough depth of field to maintain focus. (Depth of field is the range in front of the camera which is in focus.) Instead I am in a lens niche in which I have to buy only the widest lenses, to get the most open apertures: they let in more light but cut my depth of field to a few inches. And I can't begin to think about the 1/1000th or even 1/500th second shutter speeds sports photographers use. I feel lucky to get 1/200th, and am more normally shooting 1/160th or less, sometimes a lot less, so I am shooting at the top of the ballistic arc, where the motion is least.

So, the few lighting designers who provide decent fill are treasured. In the "whine" piece I talk about my formula for decent lighting and why most lighting designers fail as a matter of procedure. I am color blind (partial red/green): I see most colors as they are, but I can get lost in some shades of red and green, and I see some greys as if they had a slight green tint. That means I can't be a lighting designer, at least as far as color goes. But I am very sensitive to both the aliveness of colors and to tonal variations. As a result I have a scheme for lighting regardless of colors, and I include that in the linked piece.

When I do need to color correct an image (which is all the time) I use the numbers for red, green and blue to make that correction. For something I know is a neutral color, such as a grey, I just get all three numbers to match. For skin tones I get the red-minus-green difference to be roughly the same as the green-minus-blue difference: red above green above blue, such as 160 red, 130 green and 110 blue. All skin exhibits almost the same RGB distribution. The different colors we see are due to different concentrations of pigment in the skin, not different pigments.
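Here is a minimal sketch of those two by-the-numbers checks (the tolerances and sample values are arbitrary, just for illustration):

    // Neutral grey: R, G and B should match.
    function isNeutralGrey([r, g, b], tolerance = 3) {
      return Math.abs(r - g) <= tolerance && Math.abs(g - b) <= tolerance;
    }
    // Skin tone: R > G > B, with (R - G) roughly equal to (G - B).
    function skinToneBalanced([r, g, b], tolerance = 5) {
      return r > g && g > b && Math.abs((r - g) - (g - b)) <= tolerance;
    }
    console.log(isNeutralGrey([128, 128, 130]));    // true
    console.log(skinToneBalanced([160, 135, 110])); // true: 25 vs 25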

 

Audio

Audio gets delivered either on its own, as an audio file, perhaps within a podcast format, or within video. The sources are various: recorded audio on tape, CD or DVD, or direct recording on digital audio recorders. Tape recorders are almost non-existent now, and any still around are usually used to transfer old tapes to digital files.

For equipment, buy professional quality but stay away from exotic toys. That will get you going with a solid base of equipment for professional sound. The toys are great, but they will drain your bank account and may drain your attention; that time is better spent getting experience with solid, consistent, plain old dependable sound.

The primary audio formats to be aware of (there are more, such as OGG, but these predominate):
WAV
MP3
WMA
WEBA

Depending on what you are putting together, mics and recorders will change. If you are shooting video of a stage show, for example, take your audio from radio mics placed on the stage apron (front). The speed of sound is such that taking audio from camera positions will cause the sound to show up late (out of sync) by 1 frame for every 35-40 feet at 30 fps. On video you will notice it. In person your brain compensates in some way so that you rarely notice.
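The arithmetic behind that rule of thumb: sound travels roughly 1,125 feet per second at room temperature, and a frame at 30 fps lasts 1/30 of a second, so one frame of delay is about 37 feet. A quick sketch:

    // Frames of audio delay for a given mic-to-source distance.
    // ~1,125 ft/s is the speed of sound at roughly room temperature.
    function framesLate(distanceFeet, fps = 30, speedOfSound = 1125) {
      return (distanceFeet / speedOfSound) * fps;
    }
    console.log(framesLate(37.5).toFixed(2)); // 1.00 -- one frame late
    console.log(framesLate(75).toFixed(2));   // 2.00 -- two frames late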

If you get a house feed, send it to a recorder but also use your own mics and mixers. There is often a problem matching levels with the house, especially if students are running the audio board. Usually they feed you too hot. It is a good idea to have attenuators in the audio lines to drop the line level if needed, or to run the lines into a small analog mixer you can use at the camera position. I have three which run on mains power and two small ones which run on either mains or battery.

Here is a very short page on mics and headphones.

Voice delivery and text delivery are similar but not the same. Even so, reading the text out loud will help you spot areas that need correction more easily than reading silently. If you can, read into a microphone and record it while you also listen on headphones. You will get a feel for where you need to make audible pauses to make sure the listener is still with you and comprehending. This is also where you will want to add punctuation. Allow your listener (or reader) the chance to register the information you are transmitting.

"Gain Staging"

Every item in the audio chain which passes through an amplifier, from microphone to audio file, is a "stage." The term "gain staging" simply means optimizing each stage in the audio chain for the best final result. Here are several possible items.

Remember always that the machines are merely air-pressure measuring instruments which "hear" sounds in absolute terms. Our ears hear selectively and hear sounds in relative terms. So any mix is done in relative levels, so that sounds on the recording sound the way they sound to our ears. At the same time the absolute volume is an important stopping point, because beyond that point lies distortion in the sound and damage to our ears.

Audio instruments you might be using:

  • Microphone
  • XLR or other cable connection, or a transmitter and receiver for radio mics on various radio bands
  • audio pad / audio attenuator
  • filters
  • pre-amp
  • amplifier
  • mixer
  • EQ channels
  • digital audio recorder
  • turntable and vinyl record or other older forms of audio playback such as audio tapes, reel to reel and cassette
Audio terms very, very briefly:
  • Noise - unwanted sound in the signal, such as hums and hiss
  • Noise Floor - the level of the background noise itself; signals below it are lost in the noise
  • Distortion, Clipping - when a signal overloads a gain stage and the tops of the waveform are cut off
  • Peak Volume - the loudest parts of the signal; adjust gain to avoid clipping
  • RMS - (root mean square) the average signal level, closer to what your ears hear
  • Nominal Operation - a volume just high enough to keep peaks from clipping
  • Headroom - the difference between the nominal operating level and the clipping point
  • Signal to Noise Ratio (S/N) - the ratio of the nominal signal level to the noise floor; higher is better
  • Unity Gain - volume set for neither gain nor cut
Steps to better gain staging:
  • Keep mics as close to the sound as possible without overloading them. Then, in the mix, remember: your ears will hear the brights and percussion more easily than the lows, while the mics/amps just react to air pressure, as the instruments they are.
  • Mix setup - start each channel at unity (all sliders at 0) and the pre-amp at 0, then bring the pre-amp up so that you are running at nominal
    • turn up the mic pre-amp until the meter flashes, then turn it down by about 15 dB
    • turn up the low-frequency inputs to about -12 dB, or until they sound about right (still clear and undistorted) in your headphones
    • then bring up the mid and high frequency inputs until they sound right in your headphones, also.
  • EQ (equalization) - better to cut unwanted frequencies (lowers the noise floor, raising the S/N ratio [good]) than to boost desired frequencies (raises the noise floor, dropping the S/N ratio [bad]).
  • Outboard gear (devices external to the mixer) - With compressors, only boost the makeup volume to the previous peak. Remember, the more compression, the more the floor rises, lowering the S/N ratio (more noise per amount of audio).
  • Mix - keep the hottest incoming signal at unity gain (generally percussion, or a house feed from a student). Mix the rest of the tracks by lowering levels (raises the S/N ratio). Look for a 0 dB peak on the master bus.
  • Power Amp - the last stage, feeding the speakers to the audience. Keep it low, maybe 50%, while adjusting the mix to a 0 dB peak. Once that is adjusted, increase the volume on the power amp to where you want to hear it in the room.
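Most of those level settings are just decibel arithmetic: dB is 20 times the log (base 10) of an amplitude ratio, and headroom is a subtraction. A quick sketch:

    // Decibels <-> amplitude ratios, and headroom as a subtraction.
    const toDb = ratio => 20 * Math.log10(ratio);
    const toRatio = db => Math.pow(10, db / 20);

    console.log(toRatio(-12).toFixed(2)); // 0.25 -- a -12 dB peak is about 1/4 of full scale
    console.log(toDb(0.5).toFixed(1));    // -6.0 -- halving amplitude drops about 6 dB

    // Running nominal at -12 dBFS with clipping at 0 dBFS leaves 12 dB of headroom.
    console.log(0 - (-12), 'dB of headroom');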
Long Play - How we got 33 1/3 rpm for LPs

As an industry standard, 33 1/3 rpm as a speed for records was not dropped on the world until 1948, by Columbia Records, after years of research starting in 1941 and interrupted by WWII. The 33 1/3 rpm speed and the large platter size derive from the first commercially successful sound-film format, Warner's Vitaphone, in 1927. Vitaphone used sound on disk, rather than sound on film, and what would become the standard media speeds were a matter not of years of research but of in-the-moment engineering decisions based on existing hardware and practices. Silent film was shot at 60 to 80 feet per minute. Western Electric's Stanley Watkins found that better houses projected silent film at 80-90 feet per minute, while the small houses ran, said Watkins, "anything from 100 feet up (per minute), according to how many shows they wanted to get in during the day. After a little thought we settled on 90 feet per minute, a reasonable compromise." (From Scott Eyman's "The Speed of Sound," Simon & Schuster, footnote on page 112 [section 2, chapter 1].)

That is typical of engineering decisions. Notice, nothing about aesthetics and look is mentioned. Most explanations you find today assume some great aesthetic reason for the original standard and then work backward, coming up with an explanation which sounds plausible, claiming it is all the naturally-resulting aesthetics of 24 fps.

At that time, as noted above, Western Electric's chief engineer Stanley Watkins found out what projection rates, in film-feet per minute, were being used. He cut to the middle of the varied set of projection speeds with the round number of 90 feet per minute. The film projector mechanism had a direct link to the sound platter. Here is how Watkins explains the decision to use 33 1/3 rpm.

(From Scott Eyman's "The Speed of Sound," Simon & Schuster, footnote on page 112 [section 2, chapter 1]): "We had our disks processed by the commercial record companies," said Watkins, "and the largest diameter they could handle was about 17 inches. With a record of that size the optimum speed to get the 10 minutes of recording time we needed was around 35 revolutions a minute. We standardized at 33 1/3 because that happened to fit best with the gearing arrangement our engineers were working out for coupling the turntable with the picture machine."
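The film arithmetic is easy to check, given one added fact: 35mm film runs 16 frames to the foot. Watkins' 90-feet-per-minute compromise is exactly where the "sacred 24" frames per second comes from, and it also gives the 900-foot, 10-minute reel mentioned below. A quick sketch:

    // Checking the Vitaphone-era numbers.
    const FRAMES_PER_FOOT = 16;  // 35mm film, 4 perforations per frame
    const feetPerMinute = 90;    // Watkins' compromise projection speed

    const fps = (feetPerMinute * FRAMES_PER_FOOT) / 60;
    console.log(fps, 'frames per second');  // 24 -- the "cinema" spec

    console.log(feetPerMinute * 10, 'feet per 10-minute reel'); // 900, plus leader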

There were a lot of changes in recorded media between then and 1948. Various companies were competing, and each one wanted to produce the standard which would take off and secure customers for itself. Customers just wanted something consistent which wouldn't orphan their purchases.

The material changed from a hard shellac, which ground down needles, to vinyl; the diameter went down to 10 inches and then up to 12; and the wide grooves were reduced to what Columbia called "microgrooves." Also, LPs, like other records, track the needle from the outside to the inside. The Vitaphone cinema projectors' needles tracked from the inside of the record to the outside. That was because, by the time the record finished the 10 minutes for the reel, the hard material had ground down the needle and made it less sensitive; tracking inside-to-outside put the worn needle on the outer part of the track at the end. Even so, after so many plays the record had to be replaced.

You can read more details at the links below. Although the Wikipedia LP page doesn't have anything about Watkins, it does refer to Vitaphone. The Wikipedia authors also seem to make some assumptions about film reel sizes, assuming 11-minute rather than 10-minute reels and 1,000 feet of film per reel instead of the 900 feet (plus leader) that 10 minutes at 90 feet per minute gives you. Despite these reservations, and a few more I came across but didn't run down, I'm listing these two links so you can get a sense of the numerous attempts which failed to gain traction in the 21 years before the 1948 specification.
Here are those links:
LPs and 33 1/3 rpm: https://en.wikipedia.org/wiki/LP_record
Vitaphone - Warner: https://en.wikipedia.org/wiki/Vitaphone

 

Video

Video (for the subject)

I will give you a movie-viewing assignment shortly. First, you need to know what to look for. This is about delivering information. To do that you need enough time with each scene to register content, and then further time to let meaning seep in. Otherwise you have montage: a series of short images run together to form an impression, a sense of something, but that's all. Most editing seems constrained to a 3-second (or less) rule of cut, cut, cut, cut. The overuse of cuts diminishes their impact and usefulness.

The choice of tools (cameras, effects, edits or even paint brushes) can either make your work easier (the best tools) or just get in the way of the work. For instance:

Two simple camera-choice examples: working with an optical viewfinder rather than an electronic viewfinder, and using continuous drive (don't ever do that) versus single drive.
1) The optical viewfinder is direct and on time for action, while an electronic viewfinder is always behind. Wa-a-a-y behind for anything as fast-moving and exact as dance. But for things that don't move fast, such as grip-and-grin shooting, the electronic viewfinder is an absolutely fantastic, can't-say-enough-about-it godsend.
2) Continuous drive is a spray-and-pray method. It leaves you with a ton of unusable frames, causing a lot of extra work as well as glazed-over eyes (not kidding), causing you to miss good choices and/or choose badly. And continuous drive is all but certain to never get the right shot in dance, or even in a sitting portrait, because in both cases the exact best moment to shoot will almost certainly fall somewhere between the ticks of the clock timer (intervalometer) triggering the shots. The timer cannot listen to music and cannot watch the scene for that exact, split-second moment.

Nikon MF-2 750-exposure bulk-film back

Continuous is the digital version of motor drive. When I started, the first motor drives were being used remotely, on pre-fixed and pre-focused cameras with radio-control triggers, by Sports Illustrated photographers. These were 35mm SLR cameras with a large magazine attached, usually either 250 or 750 exposures. They had a special cartridge you loaded from bulk film spools.

The common bulk film size was 30 meters (100 feet). 100 feet of 35mm film in a magazine would give the shooter 750 exposures before the film was done and needed changing. Outfits like Sports Illustrated had the money to process it and might choose at most a handful of shots. Most of the shots chosen were from the cameras operated directly by the photographer holding the camera.

Motor drives had to transport film, so they had a characteristic sound of clack, click and whizzz (mirror, shutter, film). Continuous has no film, so you just get a clack and a click. The MF-2 magazine in the picture holds 100 feet (750 exposures of 24mm x 36mm). This camera is fixed for close-up work, with a ring light on the front of a macro lens and a vertical eyepiece looking straight down.

For an exercise, as you watch television or movies, try to remember the details in each clip you are shown, as it is shown. Can you remember the signage behind the main character? How about the clothing? Who is each actor or subject? What are the details? Or, for that matter, on the news, for each infographic shown, can you even get to the details to examine them, or are they pulled away in an instant? I can guarantee you whoever put it together took longer than the 3 seconds it was shown, and the reporters took far more effort than the 30-second story had to live on air before it was replaced by the next story. Or the weather and traffic. Did you really get enough time to determine where those driving conditions were and whether you are headed there? Could you pass a simple quiz even a few seconds after?

We have an enormous quantity of media thrown at us and almost none of it sticks as anything but a fuzzy impression. So:
1) why are reporters (or "reporters"?) putting out so much effort on stories when they throw it all away on what becomes a quick impression of having done some work, and when no one can remember enough specifics after the newscast to pass a decent quiz?
2) what idiot in purchasing buys those oblique-story/message ads which manage to obscure-by-oh-so-cleverness who they are advertising for until the very end, when the company logo gets displayed (does no one at the top of those companies check whether the ads really bring in money for all the money spent making them)?

Five ground rules for shooting and editing:
1 - (shooting) Always support the performer. Let the performer's performance work. Never substitute editing and snappy camera angles for performances.
2 - (shooting) Always frame close to whatever is happening. Never crop arbitrarily by category, such as close-ups, head shots, medium shots, etcetera. Let the action dictate the frame, not the other way around.
3 - (shooting and editing) Don't get too fancy. Any extra movement, pans, zooms, moving or jiggling text, quick edits, transitions and so forth cause your brain and eyes to lose focus until everything is clear again. That literally strains your eyes (eye muscles trying to focus), giving you a headache if there is enough of it, and forces your brain to try to catch up and make sense of the scene whenever it clears long enough.
4 - (editing) Allow each scene enough time between cuts to let the viewer register detail and content. Don't rush it. Again, avoid gimmicks. Fancy will kill the mood (and comprehension and retention).
5 - (narrative) It is easier to discern the words from talking heads, because you can see the lips (we use lips more than we realize to understand voiced information), than from voice-overs, which need to be very clear and deliberate because you can't see the speaker's lips and because whatever video is being talked over is, itself, a distraction from the voiced narration.

"Assignment:" three movies to view:
1 - "Cats" (late 2019)
2 - Any Gene Kelly directed dance movie
3 - "Christmas in Connecticut"
4 - and (extra) any thing by Alfred Hitchcock, film or television, see how he can horrify and engage by implication rather than overt effects.

What to Look For in those movies:

1 - "Cats": Criticized heavily, but why? Of all the comments and critiques of the "Cats" movie I was surprised that none of them called out the horrible choppy editing (I'm calling this "chop editing"). That says something, not just about the movie, but also about audience conditioning and about the editors who've let this kind of editing practice grow. "Cats" (the movie) is a good movie to study for what not to do and what could have been with the talent who were cast. Today we have a huge amount of media which is all about creating a fuzzy impression, never to detailed or content focused, rather than delivering information in a way that it can be taken in, absorbed and remembered.

Watch most any television show from the last 20+ years and there seems to be a 3-second-or-so rule: no camera stays on an actor for more than 3 seconds, and further, no dialog for any character lasts longer than 3 seconds before the next character takes up the sentence for their 3 seconds, which then goes to another character for their 3 seconds, and so forth. This is supposed to be snappy. It is distancing and a bit disorienting. (See "Christmas in Connecticut" for the right way, which pulls you in and keeps you in.)

In "Cats," 3 seconds is often the longer shot length before you get a cut. Less than 1-second cuts are all over the place. Every time you begin to get a latch on a character the editing shows up with another cut to yet another camera angle and you lose all connection. Imagine being in a social situation and you just start talking with that cute person you finally got close to and both of you are clearly interested when someone steps into your face, turns you around and marches you across the room, repeatedly. Can you imagine your irritation?

That's the central problem with "Cats." Our main protagonist, our naif, the cat in a bag dumped out of a car, is very appealing. She projects just the right sense of innocence and hope for empathy and connection. But just as you start to connect, we are yanked away, rudely, to another camera angle, sometimes close and sometimes from a distance.

I was surprised to see James Corden in this. I've seen him on television on his show but usually tuned past the channel. This time I saw him in a song-and-dance role and thought, "Wow, good for you, James!" as he comes down the street, into the camera at sight line, with flanking actors in a line across, like "On the Town." I start settling back in my theater seat, expecting this somewhat-longer-than-3-second shot to last, to enjoy the bit, when, "SMACK," another edit cut, this time to a far view, and then a series of further rapid chop cuts.

What does "Cats" get right? A lot, including some items for which "Cats" is criticized. One of the first things to look at is behind the characters in front. All the characters behind are clearly projecting their character's very own storyline. They are not just standing around or occupying stage locations. They are actively "in" their characters, no matter how small. This is far more important to the feel of any picture (still or movie) than you might think. I've had to throw away lots of otherwise "good" dance pictures because the dancer was clearly thinking "what is my next move" on her face instead of projecting their character.

The dance is top level, casting dancers from the Royal Ballet in London. Yet the framing often chops off the feet and legs of performers who are all about feet and legs, and the constant cuts and camera-angle changes make it hard to appreciate, or even fully see, the dancing. The tap dancer (again, really superb) starts out with feet, then full body, and then we lose the feet, even the legs. Tap is all about footwork and about the sound (tap shoes count as a percussion instrument in the music).

The CGI fur ("digital fur"") on the performers is fabulous. It is also criticized heavily. I'm not sure why, unless the choppy editing is so irritating that something has to get picked on and people are so used to "chop editing" that the editing is not recognized as the overwhelming culprit, to the fur gets picked on and then everyone else just piles on.

And one last item (good and not so good): Idris Elba totally surprised me. I've been a big fan for years, but he's always been in some intense role, such as a cop. Here Idris Elba just chewed up the screen, and he was delightful as the villain. So Idris was a great choice, but there wasn't enough of his character's ominous presence to balance the rest of the cast. We are left floating a bit because we need our "heavy" as a counterweight to anchor the story, such as it is. Perhaps this is also because the "story" here is minimal and mostly suggested. We could use a clearer arc, or just go back to the stage version and give us a cat version of "A Chorus Line."

2 - Dance movies directed by Gene Kelly, such as "On The Town" or "Singin' in the Rain." In Kelly's movies you can see the camera working to capture "what is going on." When someone is dancing, the camera frame covers the entire dancer or group. No legs or arms cut off. No closeups which lose what the performers are doing. Dance takes all of the person. "Dancers and feet" are like "horse and carriage": never cut one off from the other. When those same dancers are just talking, the frame trims to cover just the heads and torsos, or just the heads; in other words, "what is going on" is the conversation, and that happens between the talking top halves. After all these decades, Gene Kelly remains a master. His work remains fresh and, aside from period fashions and physical settings (such as car models), does not feel dated.

3 - "Christmas in Connecticut" - (1945) - Two items to be aware of: 1) story development in which each part of the movie builds from earlier parts - rather than standing around for joke lines the humor, and fun comes from plot development, building as it goes and 2) framing and shot length allows the viewer time and area to connect with the character, the actor performing and through them the story. Compare this with "Cats" and you will see how editing which tries to be clever, kills performances and story lines. By contrast, the framing and editing in "Christmas in Connecticut" support the performers and through the performers, the story. This is "ordinary brilliance" on display.

Others - For a more accessible set of examples of story development which builds on each previous moment, watch "The Dick Van Dyke Show" on MeTV. While you will also see one-liner shots, usually in the office scenes with the writers, the entire story works by building moments. No one is standing around doing standup and waiting for an applause sign before going on. The Dick Van Dyke Show is situational writing, living up to the term "sit com": situation comedy.

For great examples of building mood into a story arc, telling a story and being able to induce cringe responses without explicit graphic gore, watch anything by Alfred Hitchcock.

 

Video (as a service - i.e. streaming)

It has been apparent for some years that streaming is redefining ownership. Whoever owns the streaming source is the owner, and that may not be the originator, not even for personal videos. "Platforms" which hold content on their own storage, such as streaming services, own your life. This is even more ominous than it sounds. The entire idea of a "cloud," of a "OneDrive" as the system default location, means that your own images, text and video are on someone else's server (their computers, at their locations), and you "own" them only as long as you keep the payments going. Good luck downloading it all to your local devices, assuming you still know how (a basic computer skill which seems increasingly overlooked in training and in value).

For broadcast television, cable is a losing proposition. Its days seem numbered. Certainly cable's numbers have been in steady decline since at least 2009. Cable became a money drop as it overpaid for programs, especially sports, forcing customers to pay more, which in turn drove them to streaming, satellite, or back to over-the-air (antenna) service.

Notice how everything coming in is "owned" by large corporations and not by individuals. This is about ownership and control. When you look back at media creation and ownership, you will find the technologies of each era being used to control the product, its distribution and its ownership. We still print paper tickets (a per-view control and authentication device), but now we have other box-office controls. Gatekeepers have always been there, but they used to stand at the theater or other venue.

This time the control measures and the "ticket buying and collecting" are on our "own" devices, including the phones in our hands, which themselves don't really, fully belong to us. And this time, instead of buying a ticket at the box office, we submit our most personal information, unseen, to an unseen presence behind the small screen we hold in our hand: a screen which is really a tracking device with access to us beyond the (no doubt envious) imagination and capability of the most repressive police states in history. Now "we" line up to be squeezed by an ever-tightening "velvet fist" that we purchase with irrepressible (the irony) eagerness.

 

Note: The remainder is retained from the original page. The text above was added after this archive for my online lessons was created.
The in-page self-review quiz was originally based on the material below; it remains on this page as an example.

When I created the archive I decided to change the means of delivery. The original was written using ASP, which requires delivery from a web server. I wanted the archive to be deliverable directly from files on disk, so that I could add the files to a CD/DVD or USB drive to send as part of a resume. When started from the "default.htm" file, this delivers the material in a way which operates and looks almost exactly the same as the server-based version. It uses javascript to emulate the basic functionality of the ASP scripts on the server.

Title Slides and Slide Shows


Slides mounted in Carousel slide trays for a slide projector.

16-millimeter movie projector


A "Panoram" playing a "Soundie" film - 1940 to 1946 - a jukebox showing three-minute films, usually musicals.

 

This picture shows advanced multimedia in the late 1980s - banks of 35mm slide projectors controlled by an audio-tape program. The banks of rack-mounted Kodak Carousel slide projectors shown here gave changing and overlapping images across a large area. The 10-inch reel to reel tape recorder could hold music, narration and slide cue tones. The computers, clearly running text screens, were the precursors of today's graphical user interface (GUI) editors, control programs and the now ubiquitous PowerPoint.

In August of 1987, the same year this picture was taken, Microsoft paid $14-million for Forethought, Inc., the makers of "PowerPoint." PowerPoint would, in a few very short years, put all of these banks of projectors out of business, stored away in back rooms or thrown out as junk.

In the late 1980s Harvard Graphics had 70% of the market for the first consumer program to create title slides on the computer, without film. You could add text, graphics and charts to DOS-based slide shows. When Windows came out, PowerPoint took over and Harvard Graphics simply fell behind, though it kept going for years at http://www.harvardgraphics.com with a Windows-based line of tools.

Addendum: Harvard Graphics was introduced in 1986 by Software Publishing Corporation as Harvard Presentation Graphics. Harvard Graphics was taken off the market in 2017. Here is a link to the wikipedia page for them:
https://en.wikipedia.org/wiki/Harvard_Graphics


The originator of this picture (who donated it to the public domain) states: "Partial setup for programming of screen control for the July 1987 Ford New Car Announcement Show, Detroit, MI. From left: Brad Smith, art director; Sung Soo Lee, creative director/producer; Bob Kassal, executive producer; Paul Jackson, programmer/producer" - photo (c/gpl) by MoProducer

For example, PowerPoint.

This is a program purchased and distributed by Microsoft which has become a universal presence. Yet not until the early 1980s did the first computer-generated "title slides" emerge. Harvard Graphics was the consumer program to buy at that time (I had it). It worked on DOS-based computers and was the PowerPoint of its time. When Windows 3.1 was introduced, PowerPoint dominated.

Before computers produced title slides, the titles were produced on what were essentially copy-camera arrangements with a color-enlarger head as a light table at the bottom. I did this work at the time, which lasted into the mid-to-late 1980s, until computer-generated title slides worked as well as slides on film. These were precision cameras with pin registration for 35mm film, so that the film could be exposed for several frames for one part of the slide image (usually a color), then rolled back to the exact same position (in registration) and exposed again for each next part of the slide image.

The workflow started with setting type onto litho film and copying illustrations, such as charts and graphics, onto litho as well. These were placed on the base of the machine, the base being a light table (an upside-down color-enlarger head). Each color was dialed in by turning in various amounts of cyan, magenta and yellow filters, then a photograph was taken by the camera in the copy-camera position. The next color required another sheet of litho film, another turn of the color dials, and another exposure on the same frame. If more than one copy of the title slide was to be made, then more frames were exposed for each color, and after each color was exposed the film was rolled back to the starting frame so that the process could be repeated, color by color, until each slide was done.

I've forgotten the company which first produced title slides on computer for commercial title-slide companies. They had some "jaggies" in the image, and the slides were produced on a special color monitor which was then photographed on 35mm film. This lasted several years. There were not yet any digital projectors. While you could show these "slides" on a computer monitor, monitors were not large enough for a room, so any "title slides" (in quotes because that is a term you don't hear much anymore) still needed to be on 35mm slide film so they could be projected to a large screen, or printed as large transparencies for overhead projectors.

Microsoft's PowerPoint began life on the Macintosh as "Presenter," then was renamed "PowerPoint" because of trademark problems. PowerPoint 1.0 was released in 1987. That first year it could only do black and white, no color, and it created overhead-projector transparencies, not 35mm slides. In August of 1987 Microsoft purchased Forethought, Inc., the makers of the program, for $14-million. It wasn't obvious then, but this was both the high point and the end of the road for banks of 35mm projectors as shown in the picture above.

Quoted from the New York Times:
http://www.nytimes.com/1987/07/31/business/company-news-microsoft-buys-software-unit.html

COMPANY NEWS; Microsoft Buys Software Unit
Special to the New York Times
Published: July 31, 1987

The Microsoft Corporation announced its first significant software acquisition today, paying $14 million for Forethought Inc. of Sunnyvale, Calif.

Forethought makes a program called Powerpoint that allows users of Apple Macintosh computers to make overhead transparencies or flip charts. Some industry officials think such ''desktop presentations'' have the potential to be as big a market as ''desktop publishing,'' which involves using computers to lay out newsletters and other publications. Microsoft is already the leading software supplier for the Macintosh.

(Note: Desktop publishing wiped out most typesetting by the late 1980s. Kansas City used to be one of the largest printing and typesetting hubs in the US because of our central location. In turn, what is referred to above in the NYT article as "desktop presentations" pretty much wiped out desktop publishing as a major type of office software, though it remains, mostly in niches within companies or for publications.)

Today's new media are changing the entire economic model for television and movies, just as the economic model for music has already changed. Often the new media on the scene, instead of wrecking the older media, provided an opportunity to sell the old media through the newest media.

  • Movies competed with and displaced stage shows but they also bought scripts previously staged.
  • Television competed with and displaced movies.
  • Then television became a major re-marketer for movies otherwise retired to the vaults.
  • VHS tape seemed to threaten movies but provided a further market.
  • When DVDs came along, the new DVD media provided a second shot at marketing the same movies previously marketed as VHS tapes.
  • Blu-Ray is trying the same thing with its discs but isn't making much headway because it is late and expensive, being bypassed by streaming. Essentially, Blu-Ray is the equivalent development point for optical disks as banks of slide projectors were for corporate presentations in the late 1980's. One major difference, Blu-Ray can be played on home equipment while those banks of slide projectors were major equipment investments along with the crews to operate them at each showing.
  • Downloading (i.e., Netflix, etc.) is providing yet another market as bandwidths become larger and compression better.
  • Website video delivery - universally available in terms of delivery mechanism - the web in any browser
  • Apps: These are applications (programs) each accessing a single website or company and running on specific machines or operating systems - which unlike webpage plugins and browsers must be purchased individually for access to each "site."
  • Streaming for large events, institutions and small-scale events is possible from any device, from one person to a production unit.

At the start, websites became the de facto location for delivering media: stills, audio, video. The advantage here is hugely for the public at large, the commons. Regardless of the type of computer and operating system, any website can be viewed, and its files looked at, using any browser which runs on that machine. This holds whether the computer is running Unix, Linux, Windows, WebOS, OS X (Mac), iOS (iPhone, iPad), Android or another smart-phone system. HTML and the web provide a single way that all these disparate machines can present material to the world and can view material from around the world. Apple's use of apps changes that, massively.

For the first time since 1991, when the web was being introduced to the world, we are headed back to the late 70s and early 80s, when numerous incompatible operating systems and machines competed for the same market. Most lost out. In addition, the first dial-up services were separate entry points you paid for access to, to shop and view articles. You used a modem to dial out and make contact with another computer (modem = MOdulator/DEModulator). Most took one line at a time. Some used MUXing (multiplexing) to run more than one computer through a single phone line into more than one connection at the other end. Now, in the form of streaming outlets and social media, that assortment of methods is being repeated, with a few differences, to a mainly new and younger market which doesn't remember the pains of the early 1980s and trying to make incompatible systems work together. I won't enumerate them, but the best thing Microsoft did for computing at the time was to bring most of the low-end and commercial computer world a set of consistent standards, so that equipment from various manufacturers worked together, and at low cost. I could afford to make my own boxes, which I did and still do.

When you use a browser to go to websites, you don't have to purchase a different browser for each website, and when you get to the site you don't have to purchase the connection. But when you purchase an app, you in essence purchase direct access to the equivalent of a specific website (one URL's worth). You can't go to anyone else's website to see what they may be selling. If you want another service, instead of typing another URL into the address bar of your browser, you now purchase another app to get that service or product. Apps are handy, and they do go directly to some location, but they also constrict you to shopping only at their store, unless you use a web browser on the machine. To me, this is going backward to a time I've already lived through, only then most of the machines people used were desktop computers.

Uhura on Star Trek with a PADD / Three PADDs on Star Trek

Directly above: Nyota Uhura (Nichelle Nichols) in the original Star Trek series (TOS, 79 episodes, 29/26/24 per season, September 1966 - June 1969) with her hand on a tablet (called a PADD, for Personal Access Display Device). Above to the right: three Star Trek tablets together with an "Ops Manual," another tablet, in its case. Useful for engineering specs, entertainment, reading, ship's logs and even controlling the ship remotely. These were non-functional props, but they were real concepts.

From Memory Alpha (TNG=The Next Generation): "According to frequent background performer Guy Vardaman, TNG extras often referred to the PADD props they carried down corridors as "hall passes."

On the right is a Rand computer with a tablet input stylus and a CRT screen. This is from the early 1960s and was attached to a mainframe computer.

On the bottom right is a Radio Shack Model 100 (1983), which ran 20 hours on 4 AA batteries, was very light, and had a display with 8 lines of text. It had an internal 300-baud modem, an RS-232C port and a Centronics parallel printer port. It came with the BASIC programming language, a word processor, a spreadsheet, a database and a modem program to communicate over the phone line. It could store from 8 kb to 32 kb of information. I loved mine. One day the keyboard died and I could never get a replacement I could afford (it would have cost more than the computer cost me).

Directly below is a tablet computer from 2001 running Windows XP. Microsoft set up the specs and touchscreen drivers and other companies (such as Hewlett-Packard) made the devices.

Rand Tablet early 1960s

Windows XP tablet from 2001

Radio Shack Model-100

When the iPad was announced there was a direct line of creative influence from Alan Kay's "Dynabook" although in appearance the overall machine, with its physical keyboard at the bottom, looks more like a Blackberry, just bigger. The actual screen area of Kay's Dynabook corresponds very closely to the iPad.

Click here for Kay's 1972 paper on the Dynabook. It is so forward-thinking you will wonder when the iPad will catch up. Jobs and Kay were friends, and after the iPhone came out, Kay, responding to a question from Jobs, told him that if he did the same thing in a size at least 5x8 inches he would own the world. A little later we got the iPad.

Above Left: 1968 Mockup of the Dynabook

Above right: Kay's drawing of Dynabook as he envisioned it in 1972. (1972 paper)

Bottom left: Alan Kay holding the 1968 mockup of his Dynabook concept.

Bottom Right: Kay's concept of a Dynabook in every child's hands. (1972 paper)

  • Computers were originally producers of content.
  • Today, computer technology is split between
    • producers of content and
    • mostly consumers of content who use computing devices for access to news, entertainment and other types of information.

To a lesser extent the tablet form is also usable as a production device, but it is currently hampered by its input devices (a visual-touch keyboard rather than hard keys, which are faster for typing and essential for the visually impaired), working memory, storage, slow central processing units and so forth. However, the history of new, minimal devices and software is that complexity gets added back in as users demand more utility.

So it is reasonable to expect these input and storage limitations to be overcome, to make a more broadly useful tool, and screen sizes to increase to something more usable for graphic arts. In the Windows desktop arena, Hewlett-Packard introduced a 20-to-23-inch (diagonal measure) touch-screen desktop computer with a physical arrangement much like the iMac. Clearly someone is looking at a workstation rather than a newsstand and party line.

HP 23-inch TouchSmart touchscreen, 2011, Hewlett-Packard
Nichelle Nichols with PADD, 1966 Star Trek
Underside of the first mouse
Ad showing Isaac Asimov advertising Radio Shack's pocket computer (made by Sharp) at $169.95 - 1.5 KB memory, late 1970s

Now we have new distribution mechanisms: downloads via the internet (both through browsers and through native applications, i.e. apps) and Blu-Ray discs. Blu-Ray is trying to repeat the DVD success at re-marketing all the previously re-marketed movies. But Blu-Ray is already too late and far too expensive to fully repeat the success of DVDs. Online downloads are taking up the slack.

For the last five years movie attendance has dropped. In 2008 it dropped by 5 percent. Prime-time television has been dropping for many years (since the early 1980s), but only one or two percent a year. In 2008 prime-time TV dropped 8 percent. DVD use grew until it passed VHS use in 2002. But by 2006 DVDs had reached a limit.

In 2006 DVD usage peaked and has been dropping steadily since. The barely born Blu-Ray is not the reason. Blu-Ray may still die in infancy or at least find its growth severely stunted. Consumers are moving to online digital services such as online movie rentals from Netflix. These are downloads directly to HDTVs and other player devices.

The recession of 2008 started with the fourth quarter of 2007. Almost every industry reported large losses either for the entire year of 2008 or for its last quarter. Yet Netflix's revenue in that same last quarter of 2008 was up 19 percent. In 2011 Netflix took a big hit when it decided to divide its online and postal operations into two businesses with different rules and prices, including a 60-percent price hike on subscriptions. It quickly lost hundreds of thousands of customers (800,000 as of 24 October 2011). Netflix reversed itself but was still picking itself off the floor a year later. By late October 2012 Dow Jones reported that NFLX earnings had declined by 88 percent (though still a profit, of $7.7 million) and share prices had dropped 15 percent (to $58).

In the meantime Netflix started to originate content for its own distribution via streaming on the internet. This is a departure from distributing content designed originally for other media, such as cable, network, movies or DVD/Blu-Ray. It is a little like the break when cable began producing its own regular episodes rather than re-distributing network shows.

This is not Netflix's first foray into content creation. Between 2006 and 2008 Netflix had a hand in more than 100 films. Red Envelope Entertainment, a part of Netflix, invested in documentaries and low-budget films. But those were films distributed through normal channels.

In creating content for direct streaming, however, Netflix is competing directly with the cable industry. It may have to, as parts of the cable industry, such as HBO, pull away from Netflix distribution by ending discount DVD pricing for Netflix. Netflix sees the cost as comparable to licensing a popular network show, rumored to be in the $100-million range. Netflix has about 20 million subscribers, HBO about 30 million.

The first effort is a minimum of 26 episodes of "House of Cards," with Kevin Spacey in the starring role, scheduled for streaming starting in 2012. Netflix is not actually the creator on this one; instead it is licensing the show earlier than usual, taking first distribution before it is offered to anyone else. This is to be followed closely by a deal Netflix worked out with Fox and Imagine Television to fund new episodes of "Arrested Development," scheduled for early 2013. "Arrested Development" was canceled in 2006 but has a loyal fan base and afterlife.

In 2007 we saw a precursor to a full series streamed on the internet when "Sanctuary" was released on the web as a series of 15-to-20-minute bi-weekly "webisodes." After the webisodes proved a web success, the SyFy network picked the show up as a 13-episode cable series in 2008. "Sanctuary" is still running. The webisodes were, at that point, moved to Hulu.com's Sci Fi channel.

The current marketing scheme for movies assumes 10 to 15 years worth of sales, starting with the initial release, then DVD sales, television (cable and broadcast) sales and sales to foreign markets. With DVD sales already dropping and expected to drop 11-percent in 2009, and Blu-Ray not picking up the loss, the value of any movie is now changing.

In 2008 the global market for film entertainment (movies) was $88.9 billion. 68 percent of that ($53.3 billion) came from home video sales and rentals (some VHS tape and mostly DVD). Online rentals and streaming amounted to $3.89 billion, or 4.4% of the overall film entertainment market. Before 2006 that was closer to zero.

TiVo set-top box
Roku HDMI WiFi stick
The device in front is not a USB stick, despite its appearance at first glance. It is a Roku set-top WiFi box reduced in size, with an HDMI jack at the end, ready to plug into your TV set's HDMI port. It is designed to stream video to your TV over your WiFi connection.
The back end of a TiVo with Netflix download capability. Note the Ethernet port to download video from the internet and the HDMI connector to play the video on an HDTV. The TiVo box still has the cable connectors and regular connectors for other video players. The other device is a remote control for the Roku stick; it also works on the Roku cube.

YouTube is one of the pioneers in internet video delivery. Netflix originally used the web to take orders for rental DVDs delivered to your door. In 2006 Netflix began delivering movies via internet download and streaming, making (and continuing to make) arrangements with a number of hardware vendors - makers of HDTVs and set-top boxes - to stream content directly to the DVD/Blu-Ray player, TiVo box or television. For now, at least, Netflix is keeping its physical delivery of discs to your door, but the download method is advancing. The first HDTV sets with direct internet connectors are on the market as of spring 2009.

Outfits like Netflix, Vimeo, Revision3, Blip.tv are generating distribution mechanisms using the internet to deliver content. And new media sites, such as "The Wrap" http://www.thewrap.com are reporting on new media, as well as the break-up and changes in "old" media.

Delivery of media via download
http://www.youtube.com
http://revision3.com
http://blip.tv
http://vimeo.com
http://www.netflix.com

Information about media
http://www.thewrap.com
http://iwantmedia.com


Comparing Analog to Digital Recordings

Digital Media Completely Changed Transport and Storage

Film cans and film reel
Film transport case

 

In previous years each medium not only looked or sounded different, it also physically handled differently. Some things we hung on the wall or used a slide projector for. Movies used a film projector. Video came to us on television. Audio was not a file, it was on plastic discs, or on audio tape or on radio or was part of a video or a movie.

Entirely separate machines handled each type of media. Even transportation was handled using different types of packaging and sometimes even specialized shipping agencies. Using a number of these machines together was considered multimedia. It was common in the 60s to see banks of slide projectors synchronized with audio tape in "multimedia" presentations (picture at top).

Today's digital world converts all of those items to the same delivery form, ones and zeros, comprising "data." Pictures, movies, sound, text, remote controls for power plants, navigation data and more can all be reduced to digital form. The digital form can then be reconstituted anywhere at any time from the same ones and zeros.

  • Analog-to-Digital Convertor
    • The input is analog
    • The input gets measured
    • The measurement is stored in digital form as a data file of some sort
  • Digital Data File
    • Can simply be stored and/or distributed
    • Can be edited for length, content or enhancements, or used with or in other files
    • Can be transferred via upload/download (no postage or other shipping)
  • Digital-to-Analog Convertor
    • The measurements stored in the data file are read
    • Those measurements are used to construct an analog signal which closely duplicates the original analog input (sketched in code just below)
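
To make the cycle concrete, here is a minimal javascript sketch of the measure-store-reconstruct round trip. The numbers are purely illustrative (a tiny sample rate, a simulated sine-wave input), not any particular convertor's behavior:

// A minimal sketch of the ADC -> data file -> DAC round trip.
// The "analog" input is simulated here as a continuous sine wave.
var sampleRate = 8; // measurements per "second" (tiny, for illustration)
function analogInput(t) { return Math.sin(2 * Math.PI * t); } // a 1 Hz tone

// ADC: measure the input at regular intervals and store the measurements.
var samples = [];
for (var n = 0; n < sampleRate; n++) {
  samples[n] = analogInput(n / sampleRate); // one measurement per interval
}
// "samples" is now the digital data file: copy it, send it, edit it -
// the numbers themselves do not degrade the way an analog dub would.

// DAC: read the stored measurements back out to rebuild the wave form.
for (var i = 0; i < samples.length; i++) {
  console.log("t=" + (i / sampleRate) + "s level=" + samples[i].toFixed(3));
}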


When a picture is copied using analog methods that picture loses subtleties and information, eventually becoming almost useless. The reason is that even the first generation (the original) is only an approximate match to the actual tones and colors in the scene. Each reproduction introduces more errors, though we normally accept the result as representative, even accurate. Each succeeding copy of the picture (when the camera takes a picture of the picture) just makes the situation worse.
Photo by Mike Strong, Sept 2007 - getting a small dancer outfitted for an upcoming performance of Grupo Folklorico Atotonilco.
NOTE: This is not an actual analog copy, just an illustration created in Photoshop to approximate the look of an analog copy process.

The patterns of ones and zeros can be duplicated again and again with no errors in final appearance. Unlike analog copies which introduce mis-match errors, analog-to-digital convertors essentially take measurements of the original analog curves and record those measurements in digital form.

The measurements are used to reconstitute the original curve with no apparent loss (following the initial measurement).

Analog copy processes introduced errors, called generational losses. Each copy of a copy introduced still more generational losses. It used to be common to look at a picture or hear a dubbed tape and note that it must be several generations down the line because of the degree to which it seemed degraded.

The introduction of analog-to-digital convertors made it possible to exactly duplicate video, pictures, sound and more without generational loss (except for inaccuracies in the original sampling measurement). Once in digital form the file can be copied exactly, any number of times and any number of generations removed from the original sampling.

Our eyes and ears are still analog and are unable to recognize and interpret digital data. So, to see or hear the file we use digital-to-analog convertors. The visual and aural output is not so different from before digital computers.

Input (analog): Movies - Video - Audio - Photographs - Text - Web Pages - Mixed Media - MIDI Controls

        Digital Transmission --->>>
        <<<--- Digital Interactive Response
        (patterns of ones and zeros which represent information)

Output (analog): Movies - Video - Audio - Photographs - Text - Web Pages - Mixed Media - MIDI Controls

Previously the only way to deliver media was physical.

  • Movies were delivered via film copies
  • Video and audio were delivered on tapes or vinyl discs
  • Photographs on paper.
  • Text on paper.
  • Mixed media in an art show and so forth.
Each copy was slightly different from its predecessor.

Now, the media performance is saved in digital files. The files contain information which allows a new copy of the original to be reproduced almost exactly.

Even the levers and other controls in railroad switch yards, power stations, ocean liners and your car radio volume control are replaced by instructions in patterns of ones and zeros.

 

How Accurate is a Digital Recording?

That depends on several factors

  • How many measurements, called samples, are taken of the picture or the sound
  • For sound the more samples per second the closer to the original (think of it as granularity)
  • For pictures the more pixels per picture the higher the detail
  • For both pictures and sound, the more the "bit depth" the finer (more precise) the measurement
  • For cameras the in-camera software has total control over how the image information collected by the imaging sensor or sensors will look in the output file. The same sensor in different cameras can look very different depending on each manufacturer's software engineers.
  • For audio, how high levels of sound are handled (limited) makes a huge difference. Digital signals which are too hot sound far worse than analog signals which are too hot, because digital recording measurements stop abruptly (cut very sharply) at a certain level unless the limiter circuits manage to smooth out the peaks. Some recorders store two files, one at a designated level and the other a set number of decibels under that level (i.e. 6 dB lower), so that if the first recording runs too hot you can substitute sound from the lower-level file (see the sketch below).
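
Here is a minimal javascript sketch of that hard digital cutoff, and of the minus-6-dB safety-file idea. The values are illustrative, not any specific recorder's limiter:

// Hard digital clipping: any measurement beyond full scale is flattened square.
var FULL_SCALE = 1.0;
function clip(s) { return Math.max(-FULL_SCALE, Math.min(FULL_SCALE, s)); }

// A signal recorded too hot: a peak at 1.5x full scale gets chopped to 1.0.
console.log(clip(1.5)); // 1.0 - the top of the wave is simply cut off

// The dual-file trick: the same peak on a track recorded 6 dB lower survives.
var minus6dB = Math.pow(10, -6 / 20); // -6 dB is roughly half amplitude (0.501)
console.log(clip(1.5 * minus6dB));    // about 0.75 - clean, usable sound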

In addition some other items need to be noted, although these are equivalent for both analog and digital recording tools.

  • For sound, the better the microphone the better the likeness of the measurement. 20 Hz to 20,000 Hz (20 kHz) is considered a typical full range, although vocal mics usually stop at 15,000 Hz.
    • Voices and even instruments don't normally reach these top frequencies. Most fundamentals top out below 5,000 Hz (the highest key at the right end of a piano keyboard sounds a fundamental just above 4,000 Hz). But even for instruments whose base frequencies are very low, the harmonics go much higher, well into the top end of the 20,000 Hz range.
    • Sometimes you don't want the best mic in terms of sensitivity. Sometimes a junk mic gives more useful sound by not including everything.
  • For pictures the better the lenses and so forth the better the image
  • In audio amplifiers the quality of the electronic circuit determines much of the accuracy of the measurement
  • The conditions of the recording (light, sound, room, etc.) all affect the information being digested.
Audio Sampling Rate

The more measurements you make of a "signal" in a particular unit of time the more accurately you can track the sound or the image.

For sound files we are usually making 44,100 or more measurements every second. (common sampling rate settings are: 8,000 - 11,025 - 22,050 - 32,000 - 44,100 - 48,000 - 88,200 and 96,000 samples every second) CDs are made with a sampling rate of 44,100 samples per second.

Note that 20,000 hertz (cycles per second) is considered the frequency above which humans are no longer able to hear. A criterion called the "Nyquist" frequency requires a sampling rate of at least twice the maximum frequency to be captured. That would mean at least 40,000 samples per second, except that there is no standard 40k rate. We do have a 44,100 rate, which we use for CDs.
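
In code, the Nyquist check is just a comparison. A quick javascript sketch using the CD numbers (the data-rate line anticipates the bandwidth point below):

// Nyquist check: the sampling rate must exceed twice the highest frequency.
var highestAudibleHz = 20000;
var cdSampleRate = 44100;
console.log(cdSampleRate > 2 * highestAudibleHz); // true: 44,100 > 40,000

// Raw data rate of CD audio: 16-bit samples, two stereo channels.
console.log(cdSampleRate * 16 * 2); // 1,411,200 bits per second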

Extra-Info Note - Where These Specifications Come From:

Note: This is similar in concept to the way 24 frames per second was developed - not as fps but as feet per minute, and not because of aesthetic considerations but because of practical (engineering) considerations, in the moment, given what was already on the shelf. See the item above under "Sacred 24."

44.1 kHz wasn't just a fudge factor to make sure we are above 40 kHz. Nor did it come from aesthetic judgements about sampling rate and sound quality. 44.1 kHz was the result of repurposing video recording equipment to make the first CDs. Existing audio equipment was not up to digital-sampling storage needs, but existing video tape offered enough data width.

The available bandwidth of earlier audio tape recorders was too small to store sound with the required accuracy; digital audio needed more than a megabit per second of bandwidth. Hard drive storage was also too small at the time to handle the file sizes.

However, existing analog video tape recorders were able to handle the audio by recording digital audio data in place of analog picture information. Because picture data is held in raster-scan lines, the number of raster lines in a picture frame, the number of frames per second, the number of fields per frame and the number of samples per scan line for each stereo channel all go into the digital sampling formula.

On video recorders designed for a PAL frame (25 fps, used in Europe, with 294 usable scan lines per field [of 625/2]):
50 fields/sec x 294 lines x 3 samples = 44.1 kHz

On video recorders designed for an NTSC frame (30 fps, used in the USA, with 245 usable scan lines per field [of 525/2]):
60 fields/sec x 245 lines x 3 samples = 44.1 kHz

Each frame of video is divided into two (2) interlaced "fields." So where did we get a 3 rather than a 2? Three is the number of samples per scan line for each stereo channel (3 for left, 3 for right), chosen to ensure we get more than 40,000 samples per second. The audio data was recorded as binary levels (representing digital 1s and 0s), using the analog picture track to carry audio data signals rather than video signals. On playback the binary data was converted back to analog sound.
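
The arithmetic is easy to verify in a couple of lines of javascript:

// PAL-based recorder: 50 fields/sec x 294 usable lines x 3 samples per line
console.log(50 * 294 * 3); // 44100
// NTSC-based recorder: 60 fields/sec x 245 usable lines x 3 samples per line
console.log(60 * 245 * 3); // 44100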

http://en.wikipedia.org/wiki/Nyquist_frequency
http://en.wikipedia.org/wiki/Compact_disc
http://en.wikipedia.org/wiki/PAL
http://en.wikipedia.org/wiki/NTSC

Megapixels

For pictures, even consumer point-and-shoot cameras commonly record at least 10 to 16 million individual points per picture, each of which has a measurement made of the amount of red, green and blue at its location. The larger the picture, the more it can be used for large prints on paper or another substrate.
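
The megapixel count is just width times height. A quick sketch, using made-up (but typical) sensor dimensions rather than any specific camera:

// Hypothetical sensor dimensions (illustrative, not any particular camera)
var width = 4928, height = 3264;
console.log(width * height); // 16,084,992 - about 16 million measurement points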

Bit Depth

This is the fineness of each measurement in terms of how many tonal-scale levels per sample. The greater the bit depth, the more levels are distinguished with each measurement. Although eight bits (256 possible levels) is the most common output in a JPEG photo file, the cameras which produce that file start with a 12-to-14-bit sensor reading and reduce it to eight bits when creating the file. Most cameras record in more bits per pixel; 14-bit files are the most common camera files and have 64 levels behind each one of the levels in an 8-bit output.

The output is limited in what it can show. Output is essentially always 8 bits (256 tonal levels per color channel, RGB): on a monitor, as seen by our eyes (which resolve a little less than 8 bits' worth) and in a typical JPG file (with some exceptions).

A RAW file has more detail than can be shown in a JPG or on a monitor. A RAW editor allows you to pull extra detail out of shadows and highlights by mapping the otherwise unseeable tonal-level detail into the smaller 8-bit space where you can see it.
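
A small javascript sketch of the numbers involved. The final function is a deliberately naive linear squeeze from 14 bits down to 8; real in-camera conversion applies tone curves, but the 64-to-1 collapse is the same idea:

// Levels available at each bit depth
console.log(Math.pow(2, 8));  // 256 levels in an 8-bit JPEG
console.log(Math.pow(2, 14)); // 16384 levels in a 14-bit RAW reading
console.log(16384 / 256);     // 64 RAW levels behind every single JPEG level

// Naive linear reduction: every 64 adjacent RAW levels become one 8-bit level.
function rawTo8Bit(raw14) { return raw14 >> 6; } // integer-divide by 64
console.log(rawTo8Bit(16383)); // 255
console.log(rawTo8Bit(64));    // 1 - RAW levels 64 through 127 all become "1"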

Color Sub Sampling in Video Files

Viewing video color usually involves a bit of a head fake. "Color sub sampling" is a descriptor telling us how much color information is contained in a video file. As a means of reducing (and compressing) data in a video file for easier handling most video is recorded in 4:2:0 sub sampling. Better video (more color information) is at 4:2:2 and best would be 4:4:4. This is not the same as bit depth.

Side note: The more color information available, the better the final picture and the better "chromakeying" works (those green-screen or blue-screen backgrounds which are removed electronically to substitute another picture behind the action).

The simplest explanation is to see what the sub-sample numbers describe. There are more "techy" descriptions, but this is what they mean:
1 - the first number (4) tells us to consider a sample four pixels wide within a video frame; the two following numbers describe the two rows of that four-wide sample
2 - the second number is how many pixels in the top row of the sample carry color information
3 - the third number is how many pixels in the bottom row of the sample carry color information

Notice that we are talking about 8 pixels in our sample. In 4:2:0, only 2 pixels in the top row and no pixels in the bottom row carry color information.
In 4:2:2, two of the 4 pixels in each row carry color information. In 4:4:4, all eight pixels carry color information.
In all cases each pixel carries brightness (tonal / grayscale) information. Each of those pixels has 8-bit information (exceptions for cinema cameras producing RAW files).

(In the grids below, "c" marks a pixel carrying color information, "." a pixel without.)

Video recorded as 4:2:0
  c . c .
  . . . .
When viewed or edited, the 3 pixels around each RGB color pixel are assigned the color of that pixel.

Video recorded as 4:2:2
  c . c .
  c . c .
When viewed or edited, the next pixel, right or left of an RGB color pixel, is assigned the color of that pixel.

Video recorded as 4:4:4
  c c c c
  c c c c
All pixels get full RGB values (red, green, blue). Any still camera does this.
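
Here is a minimal javascript sketch of the 4:2:0 "head fake" at playback: each stored color value is simply spread over its 2x2 neighborhood. This is nearest-neighbor reconstruction with made-up values; real decoders usually interpolate more smoothly:

// 4:2:0 playback sketch: one stored color value covers a 2x2 block of pixels.
// storedChroma is the quarter-resolution color grid (hypothetical values).
var storedChroma = [
  [10, 20],
  [30, 40]
];
// Rebuild a full-resolution 4x4 color plane by repeating each stored value.
var fullChroma = [];
for (var y = 0; y < 4; y++) {
  fullChroma[y] = [];
  for (var x = 0; x < 4; x++) {
    fullChroma[y][x] = storedChroma[y >> 1][x >> 1]; // nearest stored sample
  }
}
console.log(fullChroma);
// [[10,10,20,20],[10,10,20,20],[30,30,40,40],[30,30,40,40]]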

g r
b g
In a Bayer filter pattern this is the basic unit of four pixels, 2 green, 1 red and 1 blue
g r g r
b g b g
g r g r
b g b g

Put together, a sensor looks like this with lots of basic units of four pixels
(millions more than in this illustration)

Note that in some cameras one of the green filters is an emerald green, extending the green range.
There are numerous other filter patterns but the pattern above is the most common.

In a "Bayer"-pattern sensor (almost all sensors), each pixel directly measures only one of the three primary colors, but each pixel (location) in the image file is assigned all three colors. Each pixel's color in the image file is determined by combining the reading under the actual filter color at that physical sensor site with readings from the surrounding (physical) pixel sites.

So, for example, the red pixel (in the file) gets that location's red reading, and the green and blue readings for its location come from interpolating readings from the green and blue pixels around it.

Likewise the green gets the green reading at its location and the red and blue readings from pixels around it. The same idea with the blue pixel location. So in the saved image file, each pixel carries a red and a blue and a green value.

Note that there are two green pixels for each one red and one blue pixel. Green gets two pixels to approximate our own eyes, which are about twice as sensitive to green. The green channel is also used for the primary grayscale information.
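
Two small javascript sketches of the ideas above: first, which color a given photosite actually measures (using the g r / b g unit shown earlier), and second, the simplest possible fill-in of a missing color by averaging neighbors. Real demosaicing algorithms are considerably smarter, so treat this as illustration only:

// Which filter color sits over a given photosite in the g r / b g unit above?
function bayerColor(row, col) {
  if (row % 2 === 0) { return (col % 2 === 0) ? "green" : "red"; }
  return (col % 2 === 0) ? "blue" : "green";
}
console.log(bayerColor(0, 0)); // green
console.log(bayerColor(0, 1)); // red
console.log(bayerColor(1, 0)); // blue

// Simplest possible fill-in: estimate a missing color at a site by
// averaging the nearest sites that actually measured that color.
function average(values) {
  var sum = 0;
  for (var i = 0; i < values.length; i++) { sum += values[i]; }
  return sum / values.length;
}
// e.g. red at a green site, from the red neighbors to its left and right:
console.log(average([120, 130])); // 125 - the interpolated red value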

 

Digital Data: Is it Archival?

Yes and No.

  • Information on disks and other storage media does degrade.
  • Being in digital format does not keep it from degrading.
  • When digital data becomes unreadable it is really gone.

  • When analog data deteriorates it can still be heard or seen, just not as clearly.
  • Digital data needs to be re-copied to newer storage periodically otherwise it will be lost.
  • And that assumes a mechanism remains to read the old media.

  • An irony is that professional formats, which are always changing for the newest technology, are the worst to keep up with. Old files and old tape formats rapidly run into problems of retrieval because the machines and software which can read them become hard to find and maintain. So, a lot of old files and old tapes are simply lost. In my experience the longer-lasting formats tend to be the "prosumer" formats. But they are eventually not safe either.
  • Paper remains, often for centuries, and needs no special hardware or software to access. Paper (non-acid) remains the best archival medium.

Digital data can be copied exactly because the pattern matters more than the shape of the wave forms which make up the pattern, unlike analog storage, in which the height and shape of the wave form IS the information.

Original Wave Form
Degraded Wave Form
Re-Constituted Wave Form
When digital data is freshly recorded the wave forms which represent ones and zeros are sharply defined.

After a while the stored wave forms tend to lose their clear definition on the media, but they stay in the same place. They retain the pattern of ones and zeros.

When the degraded forms are read from disk their patterns are re-constituted by the computer into the sharply defined wave-form shapes.

As long as the pattern is still detectable the information can be re-constituted without error.

But there is a limit.

As the signal eventually degrades, the first problems will be data errors. In video this often looks like "sparklies" in the pictures - little blips of light in random locations of the frame.

If the signal degrades further, the data just plain stops. At that point the data is usually gone unless more sophisticated data-recovery methods are used. That is usually expensive, requiring specialist "magic," and even then recovery is not assured.

The illustrations above represent what happens to wave forms which represent digital information. When digital wave forms degrade they still contain information as long as the computer can read the pattern of ones and zeros.
(Note: These pictures are not actual measured wave forms, just an illustration.)
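
A minimal javascript sketch of the re-constitution idea, with made-up degraded readings: as long as each level is still on the right side of a threshold, the original ones and zeros come back exactly:

// Degraded readings from the media: ideally 1.0 or 0.0, but drifted over time.
var degradedLevels = [0.81, 0.12, 0.77, 0.09, 0.68, 0.91];

// Re-constitution: anything above the midpoint threshold reads as a 1.
var bits = "";
for (var i = 0; i < degradedLevels.length; i++) {
  bits += (degradedLevels[i] > 0.5) ? "1" : "0";
}
console.log(bits); // "101011" - the original pattern, recovered exactly

// The limit: a level that drifts past the threshold flips a bit for good.
// A 0.48 that "should" have been a one now reads back, permanently, as a zero.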

This concept is similar to Morse code in which patterns of dots, dashes and timing represent letters and numbers. Morse code is a binary format, either on or off. A letter at the sending station is converted to electrical pulses which are sent down the line to another station where the pulses are re-constituted into a "letter."

What are the patterns which represent information?

These patterns can be used to represent numbers. The numbers in turn can be used to represent text characters (i.e. the number 65 is used to represent a capital "A").

  • Bits (the ones and zeros) are arranged in sets of eight.
  • Each bit is either ON (one) or OFF (zero).
  • Normally we represent On as a one and off as a zero (1 or 0).
  • Each set of eight bits is called a byte.
  • But let's go back to that set of eight bits, the byte.
  • In any eight bits you can make up to 256 distinct patterns.

Each pattern is looked at as a number from zero through 255, which makes 256 possible values (patterns).
We start with a value of zero (00000000), then one (00000001), then two (00000010), then three (00000011), then four (00000100), then five (00000101) and so forth until we get to 255 (11111111).

Numeral   8-Bit Binary Pattern   How the ON Bits Add Up to a Value      = Value
0         00000000               0                                      0
1         00000001               1                                      1
2         00000010               2 + 0                                  2
3         00000011               2 + 1                                  3
4         00000100               4 + 0 + 0                              4
5         00000101               4 + 0 + 1                              5
6         00000110               4 + 2 + 0                              6
...       ...                    ...                                    ...
255       11111111               128 + 64 + 32 + 16 + 8 + 4 + 2 + 1     255

(Note: the right-most bit counts one or zero. The next bit to the left counts two or zero. The next counts four or zero, then eight or zero, then sixteen or zero, then 32 or zero, then 64 or zero and finally 128 or zero.)
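
javascript can do this byte arithmetic directly, which makes a quick way to check the table above:

// A byte value to its 8-bit pattern, and a pattern back to its value.
function toBits(n) {
  var bits = n.toString(2);
  while (bits.length < 8) { bits = "0" + bits; }
  return bits;
}
console.log(toBits(5));   // "00000101" (4 + 0 + 1)
console.log(toBits(255)); // "11111111" (128+64+32+16+8+4+2+1)
console.log(parseInt("00000110", 2)); // 6 (4 + 2 + 0)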

These patterns, viewed as numbers, can represent text characters. Remember, there are 256 possible patterns available in a byte (eight bits). The ASCII code (American Standard Code for Information Interchange) assigns a specific use to these patterns (as numbers): letters, numerals, punctuation and some commands. Here is a small sampling.

ASCII Number   8-Bit Binary Pattern   Text Character
9              00001001               [horizontal tab] carriage (or cursor) goes to the next column
10             00001010               [line feed] paper moves up (cursor moves down)
13             00001101               [carriage return] carriage (or cursor) goes to the left margin

Note: when you press the [Enter] key on your keyboard you get a carriage-return character (13) and a line-feed character (10) - sometimes just one of the two, depending on the system.
On a typewriter this started a new line.
On a computer this starts a new paragraph (in word processors),
a new line (in text editors),
a new field (in databases),
submits a form (in web pages)
and so forth.
These are command codes in ASCII.

64             01000000               @
65             01000001               A
47             00101111               /
48             00110000               0
97             01100001               a
34             00100010               "

Sampling measures the amplitude of a wave form many times each second and records the measurements in the digital file. Later, these measurements are used to construct (re-constitute) that waveform for playback in an analog form.

For this course it isn't important how well you understand bits and bytes mechanically. It is only important you understand that bits and bytes allow information to be retained in a manner which can be reproduced more exactly than any analog methods.
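
Still, if you want to poke at bits and bytes anyway, javascript (the same language behind the quiz below) will show you the character codes directly:

// Character to code number, code number to character, and the bit pattern.
console.log("A".charCodeAt(0));       // 65
console.log(String.fromCharCode(64)); // "@"
console.log("a".charCodeAt(0).toString(2)); // "1100001" - pad to "01100001"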

Please note: the Self-Review Quiz below covers only the very last section of this page, which is the original archived page content; a very large amount was added above it as an update. For now I am not redoing the quiz, but it still serves as an example of an in-page quiz using javascript.


Self-Review Quiz

Analog copy processes introduced errors, called __________ losses. Each copy of a copy introduced still more losses. It used to be common to look at a picture or hear a dubbed tape and note that it must be several generations down the line because it looked (or sounded) so degraded.

Analog-to-digital convertors essentially take __________ of the original analog curves and record those measurements in digital form.

Our eyes and ears are still __________ and are unable to recognize and interpret digital data. So, to see or hear the file we use digital-to-analog convertors.

How accurate is a digital recording? It depends on (check all that apply):
[ ] How many measurements (samples) per second (sound/video)
[ ] For pictures, the number of pixels per picture
[ ] Bit depth
[ ] Sunspot activity
[ ] Lens quality
[ ] Microphone quality

__________ is the fineness of each measurement, the resolution, or how many levels per sample.

As long as the __________ is still detectable, digital information can be re-constituted without error.

Note: This quiz is scored only on the page, for your own reference. It does not connect to your student grades.