
Laura Lemus Joins WebServes Board

We are very pleased to announce the newest addition to our Board of Directors:

Laura Lemus, Nonprofit Partnerships Manager at monday.com, joins WebServes.

Laura is a proud Mexican immigrant with over 12 years of experience, passion, and commitment to nonprofit organizations, social justice issues, funder engagement, and philanthropy.

Today, she leverages her nonprofit expertise to serve nonprofits across the U.S. through the monday.com for Nonprofits Program. 

Laura has already engaged with the Executive Director and Board members with informed attention and enthusiasm.

We welcome you with pleasure.
