
Sunday, June 25, 2017

Suggested Books of the Week 25

Check out the books below by Willem Witteveen, who has been researching the Great Pyramid for decades; Fred Lang, who has taught in both the traditional bricks-and-mortar classroom and the virtual classroom; LaRon A. Scott, assistant professor with the School of Education; and Colleen A. Thoma, a nationally recognized leader in the area of self-determination and transition planning.

Photo: GraphicStock.com

The Great Pyramid of Giza: A Modern View on Ancient Knowledge

After extensive research, Dutch researcher Willem Witteveen connects data from his own findings with that of other researchers, resulting in groundbreaking conclusions about the true function of the Great Pyramid and its place in history. The Greek mathematician Pythagoras stated: “All is number.” The American prophet Edgar Cayce claimed: “Sound is the medicine of the future,” and the late Egyptian wisdom keeper Abd ‘el Hakim Awyan always said: “It is all about sound.” Willem Witteveen argues that all these men were right and that what we now regard as groundbreaking and new often originated in ancient Egypt, well before the first Egyptian dynasties. This book is divided into five main parts, four of which relate to the four earthly elements: Earth, Water, Air and Fire.

The fifth part relates to the element or quintessence called Aether. Within this circle all processes on Earth and beyond take place, and the element Aether represents the divine world and is the carrier of all information. 
Read more...

How to Teach in a Virtual Classroom 


Place a bookstore order using our online order form.
How to Teach in a Virtual Classroom is a practical guide for anyone teaching in a virtual classroom. The concepts and ideas presented will dramatically elevate an instructor’s teaching skills in this unique environment. These best practices enable you to fine-tune your teaching skills so that you will become comfortable and confident teaching in a virtual classroom environment. Learning institutions will learn what support systems need to be installed to maintain this new revenue stream. A list of instructional best practices will provide you with an easy reference guide as you navigate your instructional duties of teaching virtual classes. Ground-breaking research was conducted by the author with both instructors and students around the world who either have taught or have taken virtual classroom courses.
  • How to create a virtual classroom
  • How to enhance your instructor skills for the virtual classroom
  • The critical support systems required
  • Tips that will greatly enhance an instructor’s skills in the virtual classroom
  • A list of instructional best practices
Read more...

Place a bookstore order using our online order form.
Universal Design for Distance Education: A Guide for Online Course Development   

Universal Design for Distance Education: A Framework for Delivering Online Courses was developed to introduce novice and seasoned professionals who wish to embark on distance education course and program design to a framework known as universal design for learning. One of the main goals of this book is to highlight the model we developed for designing an online course and provide a roadmap for professionals seeking to design individual courses or who wish to build distance education programs.

Each chapter focuses on distance education and how universal design can be applied to a specific aspect of the design and delivery of distance education courses and programs. Universal Design for Distance Education: A Framework for Delivering Online Courses will help audiences improve the design and delivery of distance education coursework. We hope that this book will serve as a “how-to” for building effective distance education courses and programs.
Read more...


Source: Ancient Origins and XanEdu


If you enjoyed this post, make sure you subscribe to my Email Updates!

New e-book: A President's Perspective | XanEdu

Photo: Jeff Drabant
Place a bookstore order using our online order form.
XanEdu is pleased to announce the publication of a unique e-book, 'A President's Perspective', a digital textbook on the topic of administration within higher education by Dr. Jay Gogue (immediate past president of Auburn University), informs Jeff Drabant, Higher Education Marketing Manager at XanEdu, Inc.

Price and ordering information is here.
About the Book
A President’s Perspective, featuring Dr. Jay Gogue, immediate past president of Auburn University, offers practical insights into the challenges of the university presidency drawn from his long career in administration.

This digital textbook covers all the fundamental areas of higher education administration. In addition to Gogue’s observations, each chapter of the textbook also includes commentary from veteran administrators Gretchen Bataille and Robert Moulton, who served as consultants for the project.

Additional commentary from other high-level administrators at institutions across the country appears throughout the text.
A President’s Perspective is designed to serve as the principal text for higher education administration courses or as a supplementary source tailored to specific subject areas.
The first of its kind, this digital text is a practical guide to the realities of administration.

This e-book is designed to be used in whole or in part; faculty may choose individual chapters for their classes or even elements of the individual chapters. Elements of the text are easily adaptable for use in leadership programs and seminars as well. A President’s Perspective is a cost-effective learning tool and is priced well below the cost of a standard textbook.

More about the book, including a video sample, can be found online at www.presidentiallearning.org.

 

Table of Contents
Chapter 1: Introduction
Chapter 2: Governance
Chapter 3: Faculty
Chapter 4: Town and Gown Relations
Chapter 5: Diversity
Chapter 6: Student Affairs
Chapter 7: Athletics
Chapter 8: Risk Compliance
Chapter 9: Governmental Relations
Chapter 10: Advancement
Chapter 11: Presidential Notes

Read more...

About the Author

Photo: Jay Gogue

Dr. Jay Gogue is the featured lecturer for this course.

Gogue was president of Auburn University from 2007 until his retirement in 2017. Before returning to his alma mater, he was president of the University of Houston and chancellor of the University of Houston system, and president of New Mexico State University. Earlier in his career as an administrator, he was vice president for research and vice president/vice provost for agriculture and natural resources at Clemson University and provost at Utah State University.

Read more...

Enjoy your reading! 

Source: XanEdu



Breaking the barriers of education with virtual classrooms | BrainCert Academy

Virtual classrooms stand poised to transform the way individuals and educational institutions deliver content. With the sustained interest displayed by tech developers and stakeholders alike, it is expected that in the not-so-distant future virtual classrooms will be the standard for worldwide content delivery.


"Gone are the days when a simple board meeting required the physical presence of all involved parties" summarizes BrainCert Academy.

Such gatherings are not only stressful for the participants but in most cases cause a drop in productivity, since attendees have to travel from their distinct geographical locations to the venue of such events.

Welcome to the future, where scheduled classes, events and meetings are organized with the full participation of all relevant parties, regardless of their geographical locations, through the use of online virtual classrooms.

Even before the advent of such virtual communications tech, traditional methods of simultaneous global interaction, such as conference calls and online streaming channels, afforded organizations (and groups) the opportunity to interact with individuals in remote locations. The problem was that these routes of communication were often laced with inefficiencies. Virtual communication tech improves on this traditional model by ridding the system of these inefficiencies and incorporating accessory tech features that allow for endless possibilities of use. Perhaps the most applicable and widespread use of such virtual communications technology is the scheduling of online virtual classrooms.

Virtual Classrooms 
A virtual classroom is a real-time, online simulation of a classroom learning environment. In such an environment (which could be web- or software-based), participants are fully immersed and communicate seamlessly with the teacher or instructor and other students, in the same way as they would in real life.
Read more... 

Recommended Reading
 

5 Tips to Deliver Effective Live Training Through Virtual Classrooms by BrainCert Academy.

Source: BrainCert Academy (Blog)



Know the Difference Between AI, Machine Learning, and Deep Learning | Edgy Labs (blog)

AI comes with many terms that crop up everywhere and are often used interchangeably. Read on to better understand the difference between AI, Machine Learning, and Deep Learning.


Photo: Ktsdesign | shutterstock.com
Photo: Zayan Guedim
"Artificial Intelligence is, locally, a computer algorithm tasked with solving input problems based on accessible data and operational parameters, with respect to the amount of computational power available to the algorithm. More generally, AI is the name given to machine intelligence," informs Zayan Guedim, author at Edgy Labs.

Within the vast field of AI are specific concepts like machine learning and deep learning.

Like Russian Matryoshka dolls, where each smaller doll nests inside a bigger one, deep learning is a subset of machine learning, which is in turn a subset of AI. Advances in these three technologies are already revolutionizing many aspects of modern life, and although closely related, they are not the same.

In this post, we’ll begin with the biggest doll “AI” and work our way down to the smallest.
 
Know the Difference Between AI, Machine Learning, and Deep Learning:
Read more... 

Source: Edgy Labs (blog)



The Real Threat of Artificial Intelligence | New York Times - Sunday Review

Photo: Kai-Fu Lee
"It’s not robot overlords. It’s economic inequality and a new global order" notes Kai-Fu Lee, chairman and chief executive of Sinovation Ventures, a venture capital firm, and the president of its Artificial Intelligence Institute.

Photo: Rune Fisker

Too often the answer to this question resembles the plot of a sci-fi thriller. People worry that developments in A.I. will bring about the “singularity” — that point in history when A.I. surpasses human intelligence, leading to an unimaginable revolution in human affairs. Or they wonder whether instead of our controlling artificial intelligence, it will control us, turning us, in effect, into cyborgs.

These are interesting issues to contemplate, but they are not pressing. They concern situations that may not arise for hundreds of years, if ever. At the moment, there is no known path from our best A.I. tools (like the Google computer program that recently beat the world’s best player of the game of Go) to “general” A.I. — self-aware computer programs that can engage in common-sense reasoning, attain knowledge in multiple domains, feel, express and understand emotions and so on.

This doesn’t mean we have nothing to worry about. On the contrary, the A.I. products that now exist are improving faster than most people realize and promise to radically transform our world, not always for the better. They are only tools, not a competing form of intelligence. But they will reshape what work means and how wealth is created, leading to unprecedented economic inequalities and even altering the global balance of power.

It is imperative that we turn our attention to these imminent challenges.

What is artificial intelligence today? Roughly speaking, it’s technology that takes in huge amounts of information from a specific domain (say, loan repayment histories) and uses it to make a decision in a specific case (whether to give an individual a loan) in the service of a specified goal (maximizing profits for the lender). Think of a spreadsheet on steroids, trained on big data. These tools can outperform human beings at a given task.

This kind of A.I. is spreading to thousands of domains (not just loans), and as it does, it will eliminate many jobs. Bank tellers, customer service representatives, telemarketers, stock and bond traders, even paralegals and radiologists will gradually be replaced by such software. Over time this technology will come to control semiautonomous and autonomous hardware like self-driving cars and robots, displacing factory workers, construction workers, drivers, delivery workers and many others.

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.

This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?
Read more...

Source: New York Times



Is Artificial Intelligence Overhyped in 2017? by Quora, Contributor | HuffPost

Is Artificial Intelligence Overhyped in 2017? originally appeared on Quora - the place to gain and share knowledge, empowering people to learn from others and better understand the world.




Answer by Joanne Chen, Partner at Foundation Capital, on Quora:

To quote Bill Gates: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don’t let yourself be lulled into inaction.”

In short, over the next ten years, I don’t believe AI will be overhyped. However, in 2017, will all of our jobs be automated away by bots? Unlikely. I believe the technology has incredible potential and will permeate across all aspects of our lives. But today, my sense is that many people don’t understand what the state of AI is, and thus contribute to hype.

So what can AI do today?

Artificial intelligence, a concept dating back to the 1950s, is simply the notion that a machine can perform tasks that require human intelligence. But AI today is not what the science fiction movies portray it to be. What we can do today falls in the realm of narrow AI (vs. general intelligence), which is the idea that machines can perform very specific tasks in a constrained environment. Within narrow AI, there are a variety of techniques that you may have heard of. I’ll use examples to illustrate the differences.

Let’s say you want to figure out my age (which is 31).

1) Functional programming: what we commonly know as programming, a way to tell a computer to do something in a deterministic fashion. I tell my computer that to compute my age, it needs to solve AGE = today’s date – birth date. Then I give it my birth date (Dec 4, 1985). There is 0% chance the computer will get my age wrong.
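The deterministic case can be sketched in a few lines of Python. The `compute_age` function name and its birthday tie-break are mine; the dates are the ones from the example:

```python
from datetime import date

def compute_age(birth_date: date, today: date) -> int:
    """AGE = today's date - birth date, in whole years."""
    age = today.year - birth_date.year
    # If this year's birthday hasn't happened yet, subtract one.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        age -= 1
    return age

print(compute_age(date(1985, 12, 4), date(2017, 6, 25)))  # 31
```

Given the same inputs, the result is always the same; no probability is involved.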

2) Machine learning: an application of AI where we give machines data and let them learn for themselves to probabilistically predict an outcome. The machine improves its ability to predict with experience and more relevant data. So take age, for example. What if I had 1,000 records of people’s ages and song preferences? Song preference is highly correlated with generation. For example, Led Zeppelin and The Doors fans are mostly 40+ and Selena Gomez fans are generally younger than 25. Then I could ask the computer: given that I love the Spice Girls and Backstreet Boys, how old does it think I am? The computer then looks at these correlations and compares them with a list of my favorite songs to predict my age within x% probability. This is a very simple example of using machine learning.
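That idea can be sketched as a toy Python example. Everything here, the listener data and the simple nearest-neighbour averaging, is invented purely for illustration; real systems learn from far larger data sets with proper statistical models:

```python
# Toy "learn from data" sketch: guess an age from overlap in song tastes.
LISTENERS = [
    # (age, set of favorite artists) -- made-up training data
    (52, {"Led Zeppelin", "The Doors"}),
    (48, {"Led Zeppelin", "Pink Floyd"}),
    (31, {"Spice Girls", "Backstreet Boys"}),
    (29, {"Backstreet Boys", "Britney Spears"}),
    (21, {"Selena Gomez", "Justin Bieber"}),
    (19, {"Selena Gomez", "Ariana Grande"}),
]

def predict_age(favorites, k=2):
    """Average the ages of the k listeners whose tastes overlap most."""
    ranked = sorted(LISTENERS, key=lambda r: len(r[1] & favorites), reverse=True)
    nearest = ranked[:k]
    return sum(age for age, _ in nearest) / k

print(predict_age({"Spice Girls", "Backstreet Boys"}))  # 30.0
```

Unlike the deterministic calculation above, the answer here is a guess whose quality depends entirely on the data the machine has seen.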

3) Deep learning: a type of machine learning that emerged in the last few years, talked about widely in the media when Google DeepMind’s AlphaGo program defeated South Korean master Lee Se-dol in the board game Go.

Deep learning goes a step further than ML in that it enables the machine to learn purely from the examples provided. In contrast, classic ML requires programmers to tell the computer what features it should look for. As a result, deep learning functions much more like the human brain. This works especially well with applications like image recognition.
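To illustrate learning purely from examples, here is a minimal two-layer neural network in NumPy that learns XOR from just its four input/output pairs; nothing tells it which pattern to look for. The network size, learning rate and iteration count are arbitrary choices for this sketch, not a recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR examples: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights, randomly initialized.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: hidden layer, then output.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(axis=0)

print(out.ravel())  # predictions for [0,0], [0,1], [1,0], [1,1]
```

Contrast this with the song-preference example above: there a human decided which signal mattered, while here the hidden layer discovers its own internal features during training.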
Read more...

Source: HuffPost



Beware the Hype of Artificial Intelligence | Fortune - Tech

Photo: Jonathan Vanian
"Artificial intelligence has made great strides in the past few years, but it’s also generated much hype over its current capabilities," reports Jonathan Vanian, a writer at Fortune with a focus on technology.

Photo: Getty Images

That’s one takeaway from a Friday panel in San Francisco involving leading AI experts hosted by the Association for Computing Machinery for its 50th annual Turing Award for advancements in computer science.

Michael Jordan, a machine learning expert and computer science professor at University of California, Berkeley, said there is “way too much hype” regarding the capabilities of so-called chat bots. Many of these software programs use an AI technique called deep learning in which they are “trained” on massive amounts of conversation data so that they learn to interact with people.

But despite several big tech companies and new startups promising powerful chat bots that speak like humans when prodded, Jordan believes the complexity of human language is too difficult for bots to master with modern techniques like deep learning. These bots essentially perform parlor tricks in which they respond with comments that are loosely related to a particular conversation, but they “can’t say anything true about the real world.”

“We are in an era of enormous hype of deep learning,” said Jordan. Deep learning has the potential to change the economy, he added, but “we are not there yet.”

Also on the panel, Fei-Fei Li, Google’s machine learning cloud chief and a Stanford University professor, said: “We are living in one of the most exciting and hyped eras of AI.” Li helped build the ImageNet computer-vision contest, which spurred a renaissance in AI in which researchers applied deep learning to identify objects like cats in photos.

But while everyone talks about ImageNet’s success, “we hardly talk about the failures,” she said, underscoring the hard work researchers still face in building powerful computers that can “see” like humans.
Read more... 

Source: Fortune



Saturday, June 24, 2017

Six things to know about network connectivity in Africa | IDG Connect

Photo: Kathryn Cave
Kathryn Cave, Editor at IDG Connect, summarizes: "A special summit co-located with Datacloud Europe 2017 addressed datacentres in Africa."

Photo: IDG Connect

Since the first undersea cables began to connect Africa in the early 2000s, a network of fibre has slowly grown to surround the continent. However, what exists at the edge does not necessarily make its way to the interior and this has resulted in extremely varied internet rates and costs.

The issue of network connectivity was discussed as part of Invest in Datacentre Africa, a bespoke summit co-located with Datacloud Europe 2017 in early June. The summary below highlights the most important points.

Geopolitical issues always rear their ugly heads. It can be hard to talk about Africa without getting lost in a minefield of mixed meanings. Sub-Saharan countries get lumped together because they are all served by the same cables. North African countries are often not included in discussions about Africa at all because they are served by a different set of cables. And there can be a tendency to ignore Francophone countries altogether and focus exclusively on English-speaking ones (although, admittedly, not so much in the networking space).
Read more... 

Source: IDG Connect



Give Teachers Credit: They Know Learning Is Social | EdSurge

Follow on Twitter as @spirrison
"The enthusiasm shared by educators who understand that social media will forever impact their lives and practice is very reminiscent of the vibe expressed by dot-commers two decades ago during the first wave of the Internet boom—this is a very good thing." says Brad Spirrison, Senior Director at Participate.

Photo: Rawpixel.com / Shutterstock
I’ve served as both a journalist and participant within each movement. My job is to interview and survey the pioneers, investors and stakeholders who drive technological change, share their stories, and collaborate with very smart people to build and distribute tools that help everyone else get involved.

The parallels between the early days of the world wide web and today’s edtech scene are surreal. First, you have your tinkerers who recognize the network potential of organizing information, resources and advice around communities. In the nineties, this included Geocities, Lycos and Jerry’s Guide to the World Wide Web (later called Yahoo!). More recently, communities and directories including #edchat, eduClipper and Cybraryman (AKA Jerry Blumengarten’s guide to educational websites and chats) provided voice, structure and inspiration to educators looking to connect and collaborate in ways never before possible.

As more individuals organically buy into the movement, a second layer of investors, opportunists and outright charlatans get involved. In the nineties, I literally wrote half a dozen stories analyzing the hundreds of millions of dollars invested in the online pet foods space. Virtually all of those companies, along with thousands of other venture-backed outfits during that time, turned into doo-doo.

This is also a very good thing. Railroads, telephone networks and the internet could not have been built without financial and emotional excess. Whether you are an investor, participant or observer, the key amidst these periods is to recognize innovations that remain true to the underlying cause of whatever movements they spawn within. This means approaching the very individuals and organizations you want to serve, building trust, sharing stories and identifying what problem they wish to solve.

There is a lot of noise in edtech today, mostly coming from technology and consumer marketing-oriented companies. They are trying to cut and paste solutions they built for one industry and sell them to teachers and administrators because they feel the market is hot. This approach won’t work with passionate educators who recognize that their world is changing because of technology. They don’t have time for doo-doo.

Here’s what teachers are doing with their own time...

Brad Spirrison ends his article with the following: "We are never going back to how things used to be. Together, we have the opportunity to frame and define what’s next."
Read more...

Source: EdSurge



This is your brain on PhD | Times Higher Education (THE) (blog)

Photo: Steven Franklin
Steven Franklin, visiting tutor in the history department and a PhD candidate at Royal Holloway, University of London lays bare the questions and doubts that go through his mind as he sits down to work on his thesis.

Photo: iStock

When you start a PhD, the first words you hear are: “It’s going to be hard.” As someone just starting out on an academic journey, your natural response is: “Pah! I’ll prove them all wrong. I’m the exception, not the rule.” But there's a reason they say these things – a PhD is difficult, and sometimes torturous too.

Thinking logically about the process, it shouldn’t be difficult at all. You have four years (eight if you’re doing it part-time), and so by my poor maths, it works out at roughly 65 words a day. Easy! We can all do that. I mean I’ve written more here already! Sadly, it’s not that simple – what a pity. That logic doesn’t factor in any time for conceptualising your idea into something achievable, the research, the manipulation of that research into argumentative prose and then the inevitable rewrites.

Still, let’s be generous and say 200 words a day for less than two years and your project will be complete. In fact, you’d have almost written two.

Of course, there are other pressures that every PhD student must deal with. There’s an expectation for us to take some baby steps into the world of academia. We must present our work at seminars and conferences, get used to our work being criticised, and come back stronger from that. After all, no piece of work is ever the finished article. No one, to my knowledge, has yet written the last word on any piece of history – although there are plenty of academics who’d be disturbed by the thought of their word not being the last.

Conferences are another way to introduce yourself to the academic world. Make a name for yourself. Socialise in the correct circles. These are the people that might one day examine you, become colleagues or write you a reference. We need to make the most of these exchanges. At the end of the day our future depends on it.

Then, if you’re like me, you don’t have funding, and so you must work to make ends meet. Mummy and daddy might be able to support you, but this 28-year-old would prefer some form of independence. I may be a student, but I refuse to be seen as the stereotype. I work, undoubtedly more than I should, and I do my work well. One finds that if you work hard and do it to a high enough standard, more doors open. People see your usefulness. Before you know it you have an invite to the department Christmas meal. Not a bad achievement given you were employed on a short-term basis to help with some admin.

Factoring in those things, I’m now needing to write in the region of 400 words a day. Thinking about it, maybe a little less. It’s still achievable. Isn’t it? Well of course it is.

But our list does not end there. If you’d like to get anywhere in academia, it’s desirable that you've taught, published an article or two prior to thesis submission, and written a few academic book reviews. These, sadly, suck time. Time we, perhaps just I, do not have.

Let's also pause for a moment to reflect on the poker game that you play with your PhD peers. It’s an unspoken truth, but academia is essentially a game of “my fish is bigger than yours”. It’s not necessarily about the quality of the work produced. It’s all about quantity. The more you have, the better you are. That “have” can be anything, too. Scholarly works, academic prizes, research scholarships and media contributions are all ways of physically displaying that you’re on your way to greatness.

PhD students play the game as well as anybody else. Why blame them? The very nature of the profession dictates that you must sell yourself at every possible moment and be opportunistic, too. Don't get me wrong. I love my PhD peers but there are times when the game gets tiresome.

So, where am I left now? Ah yes, 500 words a day over 200 days and the job’s done!
Read more... 

Source: Times Higher Education (THE) (blog) 

